Ilya Sutskever Launches Safe Superintelligence to Prioritize AI Safety

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new artificial intelligence (AI) company named Safe Superintelligence Inc. (SSI). The new venture aims to create superintelligent AI systems that are safe and aligned with human values, addressing concerns about the potential dangers of advanced AI. SSI’s formation follows Sutskever’s departure from OpenAI in May 2024 and marks the next step in his ongoing commitment to AI safety.

SSI’s mission is to develop AI systems that surpass human intelligence while ensuring they operate safely and responsibly. The company’s focus is singular: “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” Sutskever stated in an announcement on X. This single-minded dedication to safety is meant to insulate the company from the commercial pressures and distractions that larger tech firms often face.

Sutskever co-founded SSI with Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former OpenAI engineer. The company has established offices in Palo Alto, California, and Tel Aviv, Israel, and is recruiting technical talent to support its ambitious goals. Unlike OpenAI, which began as a nonprofit and later adopted a capped-profit structure to fund its extensive computing needs, SSI is a for-profit entity from the outset, with a stated emphasis on safety over short-term commercial gains.

The launch of SSI comes after a tumultuous period at OpenAI. Sutskever was involved in the board’s short-lived ouster of CEO Sam Altman in November 2023, a move widely reported to stem from concerns over the company’s safety practices. Following Altman’s reinstatement, Sutskever and several other key researchers left the company, including Jan Leike, who co-led OpenAI’s Superalignment team, a group dedicated to keeping AI systems safe as they grew more advanced. Leike has since joined Anthropic, another AI firm founded by former OpenAI researchers concerned about AI safety.

SSI’s approach to AI development emphasizes advancing capabilities while ensuring safety remains a priority. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” SSI announced. This strategy aims to avoid the pitfalls that can arise when commercial pressures drive the development of AI technologies at the expense of safety.

The focus on safety at SSI speaks to broader concerns in the AI community. Prominent technology figures, including Elon Musk and Steve Wozniak, have warned about the risks of unchecked AI development and have called for careful oversight and regulation to prevent harmful outcomes. Sutskever’s new venture aims to contribute to that effort by developing AI that is both powerful and safe.

Sutskever’s departure from OpenAI and the subsequent formation of SSI highlight ongoing debates within the AI field about the balance between innovation and safety. At OpenAI, the Superalignment team, which Sutskever co-led with Leike, was tasked with ensuring that AI systems did not pose existential risks to humanity. Following Sutskever’s exit, OpenAI disbanded the team, prompting further departures and criticism from within the organization.

Image credit: HAI/Stanford