Cambridge ERA:AI Fellowship
About
The Cambridge ERA:AI Fellowship is the primary programme of the Existential Risk Alliance (ERA), a nonprofit talent organisation based in Cambridge, UK. ERA was founded in December 2022 as a spin-off from the Cambridge Existential Risks Initiative (CERI), which itself was established in April 2021 by Nandini Shiralkar during her undergraduate studies at Trinity College, Cambridge. CERI refocused to become a student society, while ERA took over the fellowship programme and expanded it as an independent organisation.

The fellowship is an 8-10 week, fully funded, in-person research programme held in Cambridge. Fellows receive a salary equivalent to £34,125 per year (prorated to the programme duration), complimentary accommodation and meals during working hours, visa support, and travel expense reimbursement. Each fellow works closely with a dedicated expert mentor to develop and complete a research project in technical AI safety, AI governance, or technical AI governance.

The fellowship has run every summer since 2021, initially under CERI. In 2024, ERA narrowed its focus from the full spectrum of existential risks (which previously included biosecurity and nuclear security) to AI safety and governance specifically. A winter cohort was subsequently added, with the Winter 2026 cohort running February to March 2026. As of early 2026, ERA supports 45+ fellows across programmes and hosts 30+ community events per cohort. The programme is run by a team of approximately 14 full-time staff, including a Programme Director, seven research managers across the three focus areas, and a programmes team handling operations, community health, and fellow support. ERA's alumni have gone on to roles at RAND, the UK AI Security Institute, and other policy and research institutions.
ERA operates as a fiscally sponsored project of Rethink Priorities and has received approximately $1.75 million in cumulative funding from Open Philanthropy as of December 2023, including a grant of £809,000 (~$1M) to support its fellowship programmes. Partnerships include the Centre for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence, and the Krueger AI Safety Lab at Cambridge.
Theory of Change
ERA believes the central bottleneck to reducing catastrophic AI risk is a shortage of skilled researchers and practitioners working on AI safety and governance. By identifying high-potential early-career individuals and immersing them in an intensive funded fellowship with expert mentorship, rigorous research projects, and a strong peer community, ERA aims to accelerate their transition into productive AI safety careers. The causal chain runs: talent identification and support leads to more and better-prepared researchers working on AI safety and governance, which in turn increases the likelihood that technical and policy solutions are developed and implemented before transformative AI systems cause irreversible harm. Alumni placement at organisations such as the UK AI Security Institute and RAND represents the intended direct impact pathway.
Details
- Start Date
- -
- End Date
- -
- Expected Duration
- -
- Funding Raised to Date
- $1,750,000
- Last Updated
- Apr 3, 2026, 1:20 AM UTC
- Created
- Apr 3, 2026, 1:20 AM UTC