Pour Demain ("For Tomorrow") is an independent Swiss think tank, founded in 2021, that addresses neglected policy challenges including artificial intelligence safety and governance, biosecurity, and the convergence of synthetic biology with AI. The organization develops scientifically grounded policy recommendations and mediates between science, politics, and civil society. Its AI work includes organizing the Swiss AI Safety Prize, contributing extensively to the EU Code of Practice for General-Purpose AI models, and publishing policy briefs on topics such as resourcing the EU AI Office. Pour Demain maintains offices in Basel, Switzerland, and Brussels, Belgium, and operates with a network of over 200 professionals and a scientific advisory board of roughly 25 members.
Funding Details
No funding details are listed: annual budget, monthly burn rate, current runway, funding goal, funding raised to date, and fiscal sponsor are all unspecified.
Theory of Change
Pour Demain believes that today's most pressing technological challenges, particularly advanced AI and biosecurity risks, are neglected by existing policy institutions and require evidence-based policy interventions to ensure safe development. Its theory of change operates through several channels:
- developing concrete, science-based policy proposals that translate technical AI safety and biosecurity research into actionable governance recommendations;
- bridging the gap between scientists, policymakers, and civil society so that safety considerations are integrated into technology regulation;
- engaging directly in major regulatory processes, such as the EU AI Act Code of Practice, to shape binding standards for AI model providers;
- raising public and policymaker awareness of AI safety risks through initiatives like the AI Safety Prize; and
- advocating for adequate institutional resources (such as proper staffing and budget for the EU AI Office) to enforce AI safety regulations effectively.
By working at the Swiss, EU, and international levels, Pour Demain aims to establish transparent audits and binding standards for major AI model providers, analogous to safety frameworks in pharmaceuticals and aviation, thereby reducing risks from advanced AI systems and emerging biotechnologies.
Grants Received
Grant received from the Survival and Flourishing Fund (amount not listed).
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:48 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC