AISafety.com: Self-study
About
The Self-Study section of AISafety.com is a free, curated repository of learning materials designed to help individuals develop expertise in AI safety through independent study. It is one of ten resource sections on AISafety.com, a broader hub for the AI safety ecosystem operated by Alignment Ecosystem Development (AED), an AI safety field-building nonprofit led by Søren Elverlin.

Resources in the Self-Study section span multiple tracks. The Introductory track includes BlueDot Impact's Technical AI Safety and Frontier AI Governance programs, and DeepMind's 75-minute AGI Safety Short Course with recorded talks, exercises, and a workbook. The Technical Alignment track includes ARENA (Alignment Research Engineer Accelerator) for ML engineering upskilling, Harvard's graduate-level CS 2881 course on adversarial robustness, jailbreaks, and interpretability, CAIS's PhD-level Introduction to ML Safety survey course, and the Agent Foundations for Superintelligence-Robust Alignment guide. The Governance and Strategy tracks feature the European Network for AI Safety's Deep Dive policy course, BlueDot Impact's AGI Strategy curriculum, and MATS discussion groups. Reference materials include curated AI Alignment Forum sequences by researchers such as Richard Ngo and Paul Christiano, Victoria Krakovna's alignment resource compilation, and CHAI's annotated bibliography.

AISafety.com as a whole was launched in May 2024 and underwent a significant redesign in November 2025. The platform is maintained by a global team of volunteers and professionals. AED received a $99,330 grant from the Long-Term Future Fund in early 2024 to support 1.25 FTEs for one year, covering digital infrastructure work across approximately 15 projects including AISafety.com. The fiscal sponsor is Ashgro Inc. Content on the site is released under a CC BY-SA license, and donations are accepted via Every.org.
Theory of Change
By aggregating and surfacing high-quality AI safety learning materials in one accessible place, the Self-Study section lowers the barrier for newcomers to develop the skills and knowledge needed to contribute to AI safety research and governance. Growing the pipeline of informed researchers, engineers, and policy professionals working on AI safety expands the field's capacity to solve alignment problems before transformative AI systems are deployed. The program targets both technical upskilling (through courses like ARENA and Harvard CS 2881) and governance capacity-building, on the theory that a larger, more capable AI safety field produces better safety outcomes.
Details
- Start Date: -
- End Date: -
- Expected Duration: -
- Funding Raised to Date: -
- Last Updated: Apr 3, 2026, 1:24 AM UTC
- Created: Apr 3, 2026, 1:24 AM UTC