Stanford University
Stanford University, founded in 1885 and located in Stanford, California, is one of the world's premier research institutions. Within the university, several programs are directly relevant to AI safety and existential risk reduction: the Institute for Human-Centered AI (HAI, founded 2019) advances responsible AI research, education, and policy across all seven of Stanford's schools; the Stanford Existential Risks Initiative (SERI, founded 2020) hosts and promotes scholarship on catastrophic risks from AI, pandemics, nuclear war, and climate change; the Center for International Security and Cooperation (CISAC, founded 1983) conducts research on the AI-nuclear nexus, cyber threats, and autonomous weapons; and the Stanford Center for AI Safety (founded 2020) develops rigorous techniques for building safe and trustworthy AI systems. Stanford AI Alignment (SAIA), an initiative under SERI, focuses on building a pipeline of AI safety researchers and practitioners from the student body.
Funding Details
- Annual Budget: $9,500,000,000
- Monthly Burn Rate: $791,666,667
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Stanford's AI safety-relevant programs operate through multiple complementary pathways. HAI seeks to ensure AI development is guided by human values and societal considerations by embedding safety and ethics into mainstream AI research and influencing policy through convenings and publications like the AI Index. SERI bets on academic scholarship and student talent development: by funding rigorous research on existential risks and training a new generation of researchers (via fellowships, working groups, and SAIA's talent pipeline programs), it aims to build the intellectual and human capital needed to avert catastrophes. CISAC focuses on reducing risks at the intersection of AI and geopolitical security by informing policymakers and conducting Track II diplomacy on AI and nuclear risks. The Stanford Center for AI Safety works to develop technically rigorous methods that make AI systems provably safer. Together, these programs operate on the theory that elite academic research, policy engagement, and talent development at a major research university provide unique leverage over AI's long-term trajectory.
Grants Received
- Six grants from Open Philanthropy (amounts and dates not listed)
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:53 PM UTC
- Created: Mar 19, 2026, 10:42 PM UTC