SaferAI is a governance and research nonprofit based in Paris, France, focused on incentivizing responsible AI development through quantitative risk modeling, corporate accountability mechanisms, and technical standards. The organization independently evaluates leading AI companies' risk management practices through its public ratings system, develops quantitative models that translate AI capabilities into real-world risk assessments (with particular focus on cyber risk, CBRN threats, and loss of control), and actively contributes to AI governance standards including the EU AI Act Code of Practice, ISO/IEC and CEN-CENELEC standards, and the OECD G7 Hiroshima AI Process reporting framework.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
SaferAI operates on the theory that rigorous, quantitative risk management infrastructure for AI can make regulation enforceable and incentivize responsible development practices. By publishing transparent, independent ratings of AI companies' safety practices, they generate public accountability pressure and give policymakers, investors, and AI users actionable information. By contributing to international technical standards and policy frameworks (EU AI Act, ISO, OECD), they help ensure that safety requirements are concrete, measurable, and embedded in enforceable regulation. Their quantitative risk models aim to translate abstract concerns about AI capabilities into specific, measurable harm assessments, bridging the gap between AI safety research and practical risk management that industry and regulators can act on.
Grants Received
from Survival and Flourishing Fund
from Survival and Flourishing Fund
from Survival and Flourishing Fund
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 10:01 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
