Just one month after his departure from OpenAI, Ilya Sutskever, a prominent AI researcher and co-founder of the company, has unveiled his new venture: Safe Superintelligence Inc. (SSI).
“I am starting a new company,” Sutskever wrote in a post on X. “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”
During his tenure at OpenAI, Sutskever served as chief scientist and co-led the Superalignment team alongside Jan Leike, a group dedicated to steering and controlling advanced AI systems. Internal conflicts over the company’s approach to AI safety led to the dissolution of the team and the departure of both Sutskever and Leike. Sutskever’s exit also came after last year’s controversial attempt to oust OpenAI CEO Sam Altman, a tumultuous period for the firm. In the wake of these events, Sutskever announced the formation of SSI, a company focused on developing safe superintelligent AI.
Joining him in this endeavor are Daniel Gross, who previously led Apple’s AI and search efforts, and Daniel Levy, a former OpenAI engineer. SSI is based in Palo Alto, California, with an additional office in Tel Aviv, Israel. Together, the founding team brings deep experience in AI research and engineering.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” SSI noted in a post on X. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” it added.
The development of SSI comes at a crucial time: hype over AI is growing by the day, and more and more companies are pouring investment into the sector. This raises concerns about the risks that AI, and eventually superintelligence, could pose, including the potential for grave harm if these systems are not carefully developed. Safe AI research can mitigate those risks by building safeguards, ensuring AI systems operate within ethical frameworks, and reducing biases in training data where applicable. Malicious actors could also exploit vulnerabilities to misuse AI for criminal activity or to disrupt critical infrastructure, so a safety-focused effort like SSI can prioritize strong cybersecurity measures that protect AI systems from attack and keep them operating reliably.