© 2026 Silvia Seceleanu

Strategy & Governance·OpenAI·May 2024

★40. Ilya Sutskever's Departure and the Founding of SSI

OpenAI's co-founder and chief scientist departs to build Safe Superintelligence Inc.

Summary

In May 2024, Ilya Sutskever departed OpenAI; the following month he announced Safe Superintelligence Inc. (SSI), a company focused exclusively on building safe superintelligent AI, with no products, no revenue pressure, and no distractions. By September 2024, SSI had raised $1B at a $5B valuation. Sutskever's departure, combined with Jan Leike's simultaneous exit citing safety culture concerns, represented the most significant talent loss in OpenAI's history.

Key Concepts

The safety-commerce tension made personal: Ilya's philosophical departure

Sutskever didn't leave for a competitor or for retirement. He left because he believed the organizational structure of a commercial AI lab was fundamentally incompatible with building safe superintelligence. His departure was a statement that the problem of AI safety required a different kind of institution — one without the pressure to ship products, grow revenue, or satisfy investors.

SSI: a company with one goal and no products

Safe Superintelligence Inc. was founded with an explicitly narrow mission: solve superintelligence safety. No products, no API, no revenue targets. Co-founders Sutskever, Daniel Gross, and Daniel Levy designed SSI to insulate researchers from commercial incentives entirely. The $1B raise at a $5B valuation (September 2024) demonstrated that investors were willing to fund pure safety research at scale — or at least to bet on Ilya.

The Superalignment team collapse: Leike's public departure

Jan Leike's resignation the same week — and his public statement that "over the past years, safety culture and processes have taken a back seat to shiny products" — amplified the signal. OpenAI's Superalignment team, announced with great fanfare in July 2023 with a pledge of 20% of the company's compute, had been starved of resources. Leike went to Anthropic. The dual departure suggested OpenAI's safety apparatus was failing.