Cambridge AI Safety Hub (CAISH) is a community and educational organization based in Cambridge, UK, focused on developing the next generation of AI safety researchers and practitioners. Operating as a project of Meridian Impact CIC, CAISH runs several flagship programs: the Alignment Fellowship (a free 5-week in-person introductory course with technical and governance tracks), the Alignment Desk (a structured writing program for more experienced members), and MARS (Mentorship for Alignment Researchers, a part-time research program pairing mentees with established mentors for 10-12 weeks of research culminating in publishable work). The hub is embedded within the broader Cambridge AI safety ecosystem, with alumni and community members going on to roles at the UK AI Safety Institute, DeepMind, Apollo Research, Anthropic, and GovAI.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- Meridian Impact CIC
Theory of Change
CAISH believes that the AI safety field is talent-constrained, and that a key bottleneck is the pipeline of researchers and practitioners who deeply understand both the technical and governance dimensions of the problem. By embedding itself in Cambridge's research ecosystem and running free, high-quality educational and mentorship programs, CAISH aims to identify and develop talented individuals who would not otherwise enter AI safety work, and to accelerate their transition into productive roles at frontier labs, policy organizations, and academic research groups. The MARS program specifically targets the gap between introductory education and independent research: by pairing promising students with mentors who can guide them to publishable, field-relevant work, it grows both the supply of safety researchers and the body of useful research.
Grants Received
from Open Philanthropy
Projects
MARS (Mentorship for Alignment Researchers): a part-time research program run by Cambridge AI Safety Hub (CAISH), connecting aspiring researchers with experienced mentors to conduct AI safety research over 2-3 months.
Details
- Last Updated
- Apr 2, 2026, 9:52 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC