AISafety.com: Media Channels
About
The Media Channels section is a program within AISafety.com, a volunteer-maintained platform founded and funded by Søren Elverlin to serve as a central hub for AI safety resources. AISafety.com describes itself as maintained by a global team of volunteers and professionals who believe AI poses a grave risk of extinction to humanity. The site covers ten resource categories, including self-study courses, communities, events, jobs, funders, and media channels.

The Media Channels directory launched around March 2025 and aggregates more than 50 curated sources across eight formats: articles, blogs, books, forums, newsletters, podcasts, Twitter/X accounts, and YouTube channels. Notable entries include the 80,000 Hours Podcast, AXRP, the Dwarkesh Podcast, the Future of Life Institute Podcast, Import AI (Jack Clark), the Center for AI Safety Newsletter, ChinAI, the Robert Miles AI Safety YouTube channel, Rational Animations, the AI Alignment Forum, LessWrong, and the Effective Altruism Forum, among many others.

The directory is designed to address two challenges: helping newcomers learn about AI safety in their preferred format, and helping everyone stay current with a rapidly evolving field. Users can filter sources by format, and the team explicitly positions the page as useful for field-builders who need a convenient place to direct others. Quality is maintained through community-submitted corrections via a feedback form.

AISafety.com as a whole was developed on a shoestring budget, funded personally by Elverlin and sustained primarily by volunteers. The site underwent a significant redesign announced in November 2025. The core team includes Søren Elverlin (project lead), Melissa Samworth (designer and frontend developer), Bryce Robertson (QA and resources, manager of Media Channels), and nemo (backend development), along with over 20 additional volunteer contributors.
Theory of Change
By consolidating high-quality AI safety information sources into a single, filterable, and regularly updated directory, the Media Channels program lowers the barrier for newcomers to discover and engage with the AI safety field. Greater awareness and informed engagement from a broader audience are expected to expand the pipeline of researchers, policymakers, and funders working on reducing existential risk from advanced AI. Field-builders and community organizers benefit from having a reliable, curated resource to share with people exploring how to contribute.
Details
- Start Date
- -
- End Date
- -
- Expected Duration
- -
- Funding Raised to Date
- -
- Last Updated
- Apr 3, 2026, 1:24 AM UTC
- Created
- Apr 3, 2026, 1:24 AM UTC