Impact Research Groups (IRG)
About
Impact Research Groups (IRG) is a structured research training programme operated by Arcadia Impact, a charitable organisation registered in England and Wales (Charity No. 1212323). The programme is designed to help talented and ambitious students take their first steps toward careers in high-impact research by working in small, mentored teams on real research questions. Over 8 weeks, participants explore a research question within one of five key focus areas: technical AI safety, AI governance, biosecurity, global health, or animal welfare. The programme culminates in a competitive review, with a £2,000 prize awarded to the best overall project. The kick-off weekend requires in-person attendance in London, while the remainder of the programme can be completed remotely. Cohorts have included approximately 40 participants.

Past winning projects include 'The Data of Gradual Disempowerment: Measuring Systemic Existential Risks from Incremental AI Development' (Spring 2025 first place), 'A High-Level Comparative Analysis of Governance Structures at Frontier AGI Labs' (Winter 2024 second place), and work on mechanistic interpretability, pandemic preparedness, and alternative proteins.

IRG sits within Arcadia Impact's broader portfolio of talent development initiatives, which also includes LASR Labs (technical AI safety research), the AI Governance Taskforce, the AI Safety Engineering Taskforce (ASET), and the Orion AI Governance Initiative. Arcadia Impact was established in 2022 (initially as London EA Hub) and has seven full-time staff. The programme is led by Alicia Pollard, Groups Lead at Arcadia Impact, with support from EA group presidents at LSE, UCL, and Northumbria University London. Arcadia Impact's primary funder is Coefficient Giving (formerly Open Philanthropy's Catastrophic Risk Capacity Building team), which has awarded over £8 million to Arcadia Impact since 2022.
Theory of Change
IRG's theory of change centres on talent pipeline development: by providing structured, mentored research experience to promising students early in their careers, the programme aims to build a generation of researchers equipped to work on the world's most pressing problems, including AI safety and biosecurity. Participants gain hands-on research skills, exposure to the effective altruism and AI safety research communities, and concrete outputs they can use to launch careers at high-impact organisations. This supply-side intervention in the AI safety talent ecosystem is intended to compound over time as alumni go on to work at leading AI safety labs, policy bodies, and research organisations.
Details
- Start Date: -
- End Date: -
- Expected Duration: -
- Funding Raised to Date: -
- Last Updated: Apr 3, 2026, 1:21 AM UTC
- Created: Apr 3, 2026, 1:21 AM UTC