OpenAI's co-founder and chief scientist departs to build Safe Superintelligence Inc.
In May 2024, Ilya Sutskever departed OpenAI, and the following month he announced Safe Superintelligence Inc. (SSI), a company focused exclusively on building safe superintelligent AI, with no products, no revenue pressure, and no distractions. SSI raised $1B at a $5B valuation by September 2024. Sutskever's departure, combined with Jan Leike's exit the same day citing safety-culture concerns, represented the most significant talent loss in OpenAI's history.
Sutskever didn't leave for a competitor or for retirement. He left because he believed the organizational structure of a commercial AI lab was fundamentally incompatible with building safe superintelligence. His departure was a statement that the problem of AI safety required a different kind of institution — one without the pressure to ship products, grow revenue, or satisfy investors.
Safe Superintelligence Inc. was founded with an explicitly narrow mission: solve superintelligence safety. No products, no API, no revenue targets. Sutskever, Daniel Gross, and Daniel Levy designed SSI to insulate researchers from commercial incentives entirely. The $1B raise at a $5B valuation (September 2024) demonstrated that investors were willing to fund pure safety research at scale, or at least to bet on Ilya.
Jan Leike's resignation on the same day, and his public statement that "over the past years, safety culture and processes have taken a back seat to shiny products," amplified the signal. OpenAI's Superalignment team, announced with great fanfare in July 2023 with a pledge of 20% of the company's compute, had reportedly been starved of the resources it was promised. Leike went to Anthropic, and OpenAI dissolved the Superalignment team within days. The dual departures suggested OpenAI's safety apparatus was failing.