Grantmaking.ai is the open evaluation platform for AI safety funding. Every opportunity, review, and signal in one place, so new funders can participate without starting from scratch.
Share your thoughts: comment on our design doc
AI lab employees and newly motivated donors have capital to deploy. But the current system wasn't built for them.
Discovery is insider-only
The best opportunities are hidden behind personal networks. Breaking in can take months of effort.
Research is duplicated
Every funder asks the same questions. Every grantee answers them again.
Information is stale
An org's website says “seeking funding” but they closed their round six months ago. You can't know without asking.
Signal is invisible
What do people you trust think about a project? That signal lives in private conversations and in the heads of experienced grantmakers.
See what experienced funders and domain experts think — and why. Reviews, picks, and community signal, all in one place.
Public reviews
See who said what. Weight signal by source, not volume.
Regrantor picks
Follow trusted evaluators. See what they recommend and why.
Public track records
Evaluators build reputation. Good judgment gets recognized over time.
Everything you need to move capital toward high-impact work.
Orgs, researchers, projects, funds, and grantmakers — searchable, browsable, always growing.
Quick scan, one-pager, and deep-dive docs. Team backgrounds, contacts, and theories of change.
Runway, burn rate, and funder pipeline — with clear timestamps so you know what's current.
We build your profile from public sources so you don't have to start from zero. Claim it, add context, and stop repeating yourself to every funder.
Step 1
We find you
Your profile is auto-created from public data — grants, publications, and web presence.
Step 2
You claim it
Verify ownership, correct anything, and add the details only you know.
Step 3
Funders find you
Visible to every funder on the platform. No more cold emails or network-dependent discovery.
A multi-year track record of shipping tools for the AI safety community.
Anchor-funded
Committed capital to distribute through the platform at launch.
Open & nonprofit
Grant-funded infrastructure, not a business. All public data stays public.
Community-shaped
Designed with funders, regrantors, and grantees from the start.
Explore a quick preview of the organizations working to make AI safer.
Previewing 533 organizations and 1,503 grants from the broader database.
5050 is a free 12-14 week company-builder program run by Fifty Years that helps scientists, researchers, and engineers become deep-tech startup founders, with a dedicated AI safety track.
80,000 Hours is a nonprofit that provides free research, career advice, and a job board to help people find careers that effectively tackle the world's most pressing problems, with a current focus on AI safety.
The AAAI/ACM Conference on AI, Ethics, and Society (AIES) is a peer-reviewed academic conference series that brings together a multidisciplinary community to examine the ethical, social, and policy dimensions of artificial intelligence.
The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) is a premier peer-reviewed academic conference that brings together researchers and practitioners to investigate fairness, accountability, and transparency in socio-technical systems.
ACX Atlanta (The Atlanta Moloch Slayers) is a monthly in-person meetup group for rationalists and readers of the Slate Star Codex and Astral Codex Ten blogs in Atlanta, Georgia.
Adam Jermyn is a physicist and AI safety researcher at Anthropic, working on neural network interpretability and inner alignment. He previously conducted independent AI alignment research after transitioning from a career in computational astrophysics.
The Advanced Research and Invention Agency (ARIA) is a UK government research funding agency that backs high-risk, high-reward R&D in underexplored areas, including a major £59 million programme on formal mathematical safety guarantees for AI systems.
AE Studio is a bootstrapped technology studio and AI alignment research organization that funds neglected safety research from its software consulting profits. Their work spans brain-computer interfaces, self-other overlap fine-tuning to reduce LLM deception, and consciousness research.
Aether is an independent research lab focused on LLM agent safety, conducting technical research on the alignment, control, and evaluation of large language model agents.
Votes and reviews from funders and domain experts, with identities attached so you can weigh the source.
Investor-style updates from funded projects. One place to track how the work is actually going.
Pull any public data programmatically. Run your own analysis, build your own tools, plug in agents.
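For illustration only, here is a minimal sketch of what pulling public data for your own analysis might look like. The endpoint, pagination scheme, and field names below are assumptions for the sake of example, not the platform's documented API.

```python
# Minimal sketch of pulling public organization data for local analysis.
# BASE_URL, the pagination scheme, and all field names are illustrative
# assumptions, not the platform's documented API.
import requests

BASE_URL = "https://grantmaking.ai/api/v1"  # hypothetical endpoint

def fetch_organizations():
    """Fetch all public organization records, one page at a time."""
    orgs, page = [], 1
    while True:
        resp = requests.get(f"{BASE_URL}/organizations", params={"page": page})
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            return orgs
        orgs.extend(batch)
        page += 1

if __name__ == "__main__":
    for org in fetch_organizations():
        # "funding_status" and "runway_months" are assumed fields.
        if org.get("funding_status") == "seeking":
            print(f"{org.get('name')}: {org.get('runway_months')} months runway")
```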