Science of Trustworthy AI
Science of Trustworthy AI is a program of Schmidt Sciences, the philanthropic organization founded by Eric and Wendy Schmidt. The program addresses a critical gap: despite rapid AI progress, researchers lack a rigorous scientific understanding of what makes AI systems trustworthy and safe. It funds basic technical research organized around three interconnected aims: characterizing and forecasting misalignment in frontier AI systems; developing generalizable measurements and interventions with predictive validity; and extending oversight to regimes where AI capabilities exceed human ability to directly evaluate correctness. The program supports individual researchers, research teams, and institutions globally through grants of up to $1M (Tier 1) and $1M–$5M+ (Tier 2) for one-to-three-year projects, and provides additional support including GPU compute, software engineering, and frontier model API access.
Funding Details
- Annual Budget: —
- Monthly Burn Rate: —
- Current Runway: —
- Funding Goal: —
- Funding Raised to Date: —
- Fiscal Sponsor: —
Theory of Change
The program operates on the theory that frontier AI development currently resembles "alchemy more than a mature science": unsafe AI behaviors and failures are addressed in ad hoc ways that do not generalize. By funding basic research to build a rigorous science of AI evaluation and safety, the program aims to create measurement tools, interventions, and oversight frameworks that are robust across model families, capability regimes, and deployment contexts. The causal chain: fund academic and nonprofit researchers to develop a generalizable safety science → produce reliable evaluations and interventions → enable AI developers and policymakers to deploy frontier systems in a trustworthy manner → reduce the probability of catastrophic misalignment or misuse as AI capabilities increase.
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Discussion
No comments yet.
Details
- Last Updated
- Apr 2, 2026, 9:51 PM UTC
- Created
- Mar 19, 2026, 10:31 PM UTC