The University of Oxford, founded in the 12th century, is the oldest university in the English-speaking world and a major center for AI safety and existential risk research. It was home to the Future of Humanity Institute (FHI), founded in 2005 by Nick Bostrom, which helped establish the fields of existential risk, AI alignment, and AI governance before closing in April 2024. Oxford continues to advance AI safety research through the Oxford Martin AI Governance Initiative, the Oxford Internet Institute's AI governance programme, and individual researchers working on technical AI safety. In 2025, Oxford researchers were awarded funding from the UK Government's ARIA Safeguarded AI programme to develop novel approaches to safe AI deployment.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Oxford's approach to reducing AI-related risks operates through multiple channels: producing foundational academic research that shapes the global understanding of AI risks and governance options; training the next generation of researchers and policymakers who will work on AI safety and governance; hosting interdisciplinary institutes that bridge technical AI research with philosophy, economics, and policy; and influencing government and international AI policy through direct engagement and published analysis. The university's prestige and convening power amplify the reach of its AI safety research, helping translate academic findings into real-world governance frameworks and safety standards.
Grants Received
- from Open Philanthropy
- from Open Philanthropy
- from Open Philanthropy
- from Open Philanthropy
- from Open Philanthropy
Details
- Last Updated: Apr 2, 2026, 9:51 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC