Now open
We're partnering with Manifund to distribute $1M in grants for work that reduces existential risk from AI. Expect a short application, fast turnaround, and a real human reading what you wrote.
For researchers, practitioners, and small teams doing concrete work to reduce that risk, including technical safety, governance, field-building, and evaluations.
What we fund
We are open to a wide range of approaches. The bar is whether your work plausibly reduces existential risk from advanced AI, not which specific lane it lives in.
Alignment, interpretability, control, evaluations, and other empirical or theoretical work to make advanced AI systems safer.
Research, advocacy, and institution-building that helps governments and labs make better decisions about frontier AI.
Programs, fellowships, communities, and content that bring talented people into AI safety work.
Benchmarks, datasets, tools, and shared infrastructure that the rest of the safety ecosystem can build on.
The details
Grant size
Most grants will be smaller, practical bets. Larger requests are still welcome when the scope justifies it.
Total round
We expect to fund many projects from the $1M pool in this launch round, with follow-on funding possible for the strongest grantees.
Timeline
We aim for first decisions within four weeks of submission. Reviewers may follow up with questions before deciding.
Process
Step 1
Tell us who you are, what you have worked on, and how to reach you. Start your profile.
Step 2
A few hundred words on your project, a budget breakdown, and the outcomes you are aiming for. No fifty-page proposals.
Step 3
Initial decisions land within four weeks of submission. If reviewers have questions, they will reach out before making a final call.
Who reviews applications
Applications are reviewed by a panel of AI safety practitioners and grantmakers who have evaluated proposals across the existing funding landscape. We partner with Manifund for fiscal sponsorship, so funded projects can receive grants without setting up their own legal entity.
Common questions