AI: Futures and Responsibility Programme
The AI: Futures and Responsibility Programme (AI:FAR) is a joint initiative of the Leverhulme Centre for the Future of Intelligence (LCFI) and the Centre for the Study of Existential Risk (CSER) at the University of Cambridge. Directed by Dr Seán Ó hÉigeartaigh, the programme aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity. Its research spans three main themes: futures and foresight (analyzing how AI developments could create lasting societal consequences); governance, ethics and responsible innovation (developing practical policy interventions); and safety, security and risk (addressing AI-related vulnerabilities). The programme works with stakeholders in academia, policy, industry, and civil society to translate research into actionable governance frameworks.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- University of Cambridge (via LCFI and CSER)
Theory of Change
AI:FAR's theory of change operates through three connected mechanisms. First, by conducting rigorous foresight research on AI development trajectories, they identify underexplored risk scenarios and potential societal impacts before they materialize, enabling proactive rather than reactive governance. Second, by developing practical interventions and governance frameworks that are robust across a range of uncertain futures, they aim to ensure that safety measures remain effective regardless of which specific AI development path unfolds. Third, by actively partnering with policymakers, industry leaders, and international bodies (including the OECD, United Nations, and national governments), they translate academic research into real-world policies and standards that shape how AI is developed and deployed. The underlying belief is that interdisciplinary research combining technical understanding with governance expertise, conducted within a world-class university setting with strong international networks, can meaningfully influence the trajectory of AI development toward safer and more beneficial outcomes.
Grants Received
from Long-Term Future Fund
from Survival and Flourishing Fund
Projects
No linked projects.
People
No linked people.
Details
- Last Updated
- Apr 3, 2026, 1:17 AM UTC
- Created
- Mar 18, 2026, 11:18 PM UTC