As a seasoned crypto investor and tech enthusiast with a keen interest in the intersection of artificial intelligence (AI) and blockchain, I am particularly intrigued by the recent announcement from Ilya Sutskever, Daniel Levy, and Daniel Gross of their new company, Safe Superintelligence, Inc. (SSI). Their commitment to advancing AI safety and capabilities in tandem is a welcome development in an increasingly complex and risky landscape.
Sutskever, formerly OpenAI's chief scientist, founded SSI together with Daniel Levy, a former OpenAI engineer, and the investor Daniel Gross, with the objective of developing AI safety and capabilities concurrently.
With offices in Palo Alto and Tel Aviv, SSI intends to push AI innovation forward while treating safety as its sole priority. The founders stress that this singular focus keeps the company's work from being shaped by short-term commercial pressures.
"I am starting a new company," Sutskever announced on X (@ilyasut) on June 19, 2024.
Sutskever said that the company's sole focus lets it avoid distraction from management overhead and product cycles, and that its business structure insulates safety, security, and progress from short-term market pressures.
Sutskever left OpenAI on May 14, months after his involvement in the board's brief ouster of CEO Sam Altman and his own subsequent resignation from the board. Daniel Levy departed OpenAI shortly afterward.
The duo had been members of OpenAI's Superalignment team, established in July 2023 with the mission of steering AI systems that surpass human intelligence, a milestone commonly associated with artificial general intelligence (AGI). OpenAI disbanded the team after Sutskever and other researchers departed.
Vitalik Buterin, co-founder of Ethereum, regards AGI as potentially dangerous but considers that risk smaller than the dangers posed by corporate greed and military misuse of AI. Meanwhile, roughly 2,600 tech industry leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter calling for a six-month pause in AI development to assess its significant risks.
As a responsible crypto investor, I value companies that put safety first as they develop AI capabilities. That commitment to ethical considerations is both admirable and essential for the future of our industry.