University of California, Berkeley
The University of California, Berkeley is home to some of the most influential AI safety research programs in the world. Its Center for Human-Compatible Artificial Intelligence (CHAI), founded in 2016 by Stuart Russell and colleagues, focuses on value alignment and building AI systems that are provably beneficial to humans. Berkeley AI Research (BAIR) — with over 50 faculty and 300+ graduate students — conducts foundational work across machine learning, robotics, and responsible AI. The Center for Long-Term Cybersecurity (CLTC) and Berkeley RDI complement these efforts with policy, governance, and responsible AI research.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $20,000,000
- Fiscal Sponsor: -
Theory of Change
UC Berkeley's AI safety programs collectively operate on the theory that reducing catastrophic and existential risks from advanced AI requires both foundational technical research and engagement with policy, governance, and societal values. CHAI specifically pursues a technical theory of change: by developing value alignment methods (especially inverse reinforcement learning and cooperative AI), AI systems can be built that defer to human preferences rather than pursuing arbitrary objectives, making them inherently safer as capabilities scale. BAIR and CLTC add complementary paths — training the next generation of AI researchers with safety-conscious practices, setting AI risk management standards, and influencing AI governance frameworks. The concentration of world-class researchers at Berkeley also enables field-building: by attracting and training students who go on to work at AI labs, governments, and other universities, Berkeley multiplies its impact across the broader AI ecosystem.
Grants Received
- Six grants from Open Philanthropy
Details
- Last Updated: Apr 3, 2026, 1:17 AM UTC
- Created: Mar 20, 2026, 2:34 AM UTC