Safe Superintelligence Inc. (SSI) was founded in June 2024 by former OpenAI Chief Scientist Ilya Sutskever, alongside Daniel Gross and Daniel Levy, with a singular mission: to achieve safe superintelligence. The company deliberately avoids building intermediate products, positioning itself as 'the world's first straight-shot SSI lab.' SSI tackles safety and capabilities as tandem technical problems to be solved through scientific breakthroughs, insulated from short-term commercial pressures. Operating with a lean team across Palo Alto and Tel Aviv, SSI has raised $3 billion from top-tier investors including Greenoaks Capital, Andreessen Horowitz, Sequoia Capital, Alphabet, and Nvidia.
Funding Details
- Annual Budget: —
- Monthly Burn Rate: —
- Current Runway: —
- Funding Goal: —
- Funding Raised to Date: $3,000,000,000
- Fiscal Sponsor: —
Theory of Change
SSI believes that the primary risk from advanced AI comes from pursuing capabilities while treating safety as secondary, or as a constraint on progress. Its theory of change: build a small, world-class team that treats safety and capabilities as equally important technical problems to be solved simultaneously, and insulate that work completely from commercial pressures and product cycles. On this view, superintelligence can be reached in a way that is inherently safe, rather than having safety retrofitted onto an already-powerful system. Full autonomy from investor pressure to ship products is seen as a prerequisite for this approach to work.
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 10:07 PM UTC
- Created: Mar 19, 2026, 10:30 PM UTC
