
Center for AI Safety
The Center for AI Safety (CAIS) is a San Francisco-based nonprofit founded in 2022 by Dan Hendrycks and Oliver Zhang. CAIS operates across three pillars: conducting technical and conceptual AI safety research; building the AI safety research field through compute infrastructure, fellowships, courses, and competitions; and advocating for AI safety standards with policymakers and the public. The organization provides free access to a GPU compute cluster supporting hundreds of researchers, publishes foundational benchmarks and safety methods at top ML conferences, and runs educational programs, including a widely used AI safety textbook and course.
Funding Details
- Annual Budget: $7,163,607
- Monthly Burn Rate: $596,967
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $32,984,336
- Fiscal Sponsor: -
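The reported monthly burn rate appears to be the annual budget divided by twelve, though the profile does not state how it is derived. A minimal sketch of that check, under that assumption (the `runway_months` helper is hypothetical, since no runway or cash-on-hand figure is listed):

```python
# Sanity-check the reported funding figures, assuming the monthly burn
# rate is simply the annual budget divided by 12 (an assumption; the
# profile does not state the formula).

ANNUAL_BUDGET = 7_163_607  # USD, from the profile
REPORTED_BURN = 596_967    # USD/month, from the profile

derived_burn = round(ANNUAL_BUDGET / 12)
assert derived_burn == REPORTED_BURN  # 596,967 matches the listed figure


def runway_months(cash_on_hand: float, monthly_burn: float) -> float:
    """Hypothetical helper: months of runway given cash and burn rate.

    The profile lists no runway figure, so this is illustrative only.
    """
    return cash_on_hand / monthly_burn
```

At this burn rate, runway would simply be remaining cash divided by roughly $597K per month.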
Theory of Change
CAIS believes that reducing societal-scale AI risk requires a multi-pronged approach. Through technical research, they develop benchmarks, safeguards, and safety methods that make AI systems more reliable and resistant to misuse, providing the scientific foundations the community needs to address safety challenges. Through field-building, they lower barriers to entry for AI safety research by providing free compute infrastructure, running educational programs, and organizing competitions, thereby expanding the number of researchers working on AI safety. Through advocacy, they raise public awareness, advise policymakers, and promote safety standards, ensuring that governance keeps pace with AI capabilities. The causal chain runs from producing concrete safety tools and knowledge, to growing the talent pipeline, to shaping policy and norms, all aimed at ensuring AI development proceeds in ways that reduce catastrophic and existential risk.
Grants Received
- Five grants from Survival and Flourishing Fund
- One grant from Open Philanthropy
Projects
AI Frontiers is a publication run by the Center for AI Safety that hosts expert commentary and debate on the societal impacts of artificial intelligence, covering topics from AI safety to policy, economics, and national security.
The AI Safety Newsletter is a free newsletter by the Center for AI Safety covering the latest developments in AI safety research, policy, and industry news. No technical background required.
AI Safety, Ethics and Society (AISES) is an open educational project of the Center for AI Safety that provides a free textbook and virtual course introducing AI safety, ethics, and societal risks to a broad, non-technical audience.
Details
- Last Updated: Apr 2, 2026, 10:09 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC