Future of Humanity Institute (FHI)
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford that investigated humanity's long-term prospects, with a particular focus on existential and global catastrophic risks, AI safety and alignment, AI governance, biosecurity, and the ethics of human enhancement. Founded by philosopher Nick Bostrom in 2005 under the Oxford Martin School (then the James Martin 21st Century School), FHI brought together researchers from philosophy, computer science, mathematics, and economics to study transformative and potentially dangerous technologies. At its peak the institute had around 40–50 researchers. It closed on 16 April 2024 following a multi-year hiring and fundraising freeze imposed by Oxford's Faculty of Philosophy and the non-renewal of remaining staff contracts.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- -
Theory of Change
FHI believed that the most important lever for reducing existential risk was producing rigorous foundational intellectual work: identifying and precisely characterizing the nature of threats from transformative technologies (especially AI), developing analytical frameworks for reasoning about them, and disseminating those frameworks to the researchers, policymakers, and institution-builders best positioned to act on them. The causal chain ran from academic research to field-building — creating new concepts, training the next generation of researchers, and spinning off organizations (such as the Centre for the Governance of AI) that could translate insights into policy and technical practice. FHI also aimed to demonstrate that serious scholarship on humanity's long-term future was viable within a major research university, thereby legitimizing the field and attracting further talent and funding.
Grants Received
from Long-Term Future Fund
from Open Philanthropy
Projects
A two-year research fellowship that gave early-career researchers salaried positions at FHI to explore questions critical to humanity's long-term future, including AI safety, existential risk, and macrostrategy.
People
No linked people.
Details
- Last Updated
- Apr 2, 2026, 9:51 PM UTC
- Created
- Mar 19, 2026, 10:43 PM UTC