
London AI Safety Research (LASR) Labs
LASR Labs (London AI Safety Research Labs) is a technical AI safety research program run by Arcadia Impact that trains researchers through intensive, supervised project work. Participants work full-time and in-person in London, in teams of three to four under the supervision of experienced AI safety researchers, producing academic-style papers and blog posts over 13 weeks. The program focuses on action-relevant questions addressing concrete threat models around loss of control to advanced AI, covering areas such as interpretability, AI control, multi-agent systems, LLM deception, and scalable oversight. Alumni have gone on to roles at the UK AI Safety Institute, Apollo Research, Leap Labs, Open Philanthropy, and PhD programs at top universities.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: Arcadia Impact
Theory of Change
LASR Labs believes that a major bottleneck to AI safety progress is the shortage of researchers with both strong technical skills and deep familiarity with AI safety problems. By running intensive, supervised research programs that take participants from proposal to published paper in 13 weeks, LASR Labs rapidly develops researchers who are ready for full-time AI safety roles. Publishing peer-reviewed work also directly advances the technical AI safety field, particularly in high-priority areas like interpretability, AI control, and deception detection. The program's concentration in London helps build a durable local AI safety research community and ecosystem.
Grants Received
- Grant from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:59 PM UTC
- Created: Mar 19, 2026, 10:32 PM UTC