FLI Fellowships
About
The Future of Life Institute's fellowship programs are a talent-pipeline initiative designed to cultivate the next generation of AI safety researchers. Launched in 2021 with support from a $25 million commitment by Ethereum co-founder Vitalik Buterin, the programs are administered jointly with the Beneficial AI Foundation (BAIF) and branded as the Vitalik Buterin Fellowships in AI Existential Safety.

The fellowship portfolio comprises three tracks:
- Technical PhD Fellowship in AI Existential Safety: full tuition and fees for up to five years of doctoral study, an annual stipend of $40,000 for students at US, UK, or Canadian universities, and a $10,000 annual research fund for travel and computing.
- Technical Postdoctoral Fellowship in AI Existential Safety: an $80,000 annual stipend plus a $10,000 research fund for up to three years.
- US-China AI Governance PhD Fellowship: launched in 2024, this track mirrors the PhD stipend structure and focuses on scholarship about how international cooperation can mitigate risks from US-China AI competition, including global governance mechanisms, institutional designs for cooperation, and comparative risk management approaches.

All technical fellowships support research into how AI could cause existential catastrophe and how to prevent it, spanning interpretability, AI objective alignment, formal verification, cybersecurity threats to advanced AI, and benchmark development. Non-existential applications such as autonomous vehicle safety or algorithmic bias are out of scope unless directly tied to broader existential risk reduction.

A distinctive feature of the program is its conflict-of-interest clause: fellows must agree not to join AGI-racing companies (specifically Anthropic, Google DeepMind, Meta, OpenAI, and xAI) without strong binding regulatory advocacy within two years of completing the fellowship.
Fellows who violate this commitment must donate half their gross monthly compensation to a mutually agreed-upon charity.

The inaugural 2022 cohort selected eight PhD fellows and one postdoctoral fellow from nearly 100 applications. The 2025 US-China cohort included three fellows: Ruofei Wang (University of Southern Denmark), John Ferguson (University of Cambridge), and Kayla Blomquist (University of Oxford). Applications for the 2026 cycle closed in late 2025, with offers to be made by the end of March 2026. Fellows participate in annual workshops and networking events coordinated by FLI and BAIF.
Theory of Change
FLI's fellowship program operates on the theory that the field of AI existential safety is talent-constrained, and that supporting early-career researchers financially enables them to pursue safety work they might otherwise be unable to afford. By covering tuition, stipends, and research expenses, FLI removes financial barriers to entering the field. The conflict-of-interest clause is designed to prevent fellows from being drawn into capability-advancing roles at frontier AI labs, preserving their independence and safety-focused orientation. Over time, a growing cohort of well-trained, financially independent safety researchers is expected to produce the technical and governance breakthroughs needed to reduce existential risk from advanced AI.
Details
- Start Date: -
- End Date: -
- Expected Duration: -
- Funding Raised to Date: -
- Last Updated: Apr 3, 2026, 1:18 AM UTC
- Created: Apr 3, 2026, 1:18 AM UTC