FLI Podcast
About
The Future of Life Institute Podcast is an educational outreach program of the Future of Life Institute (FLI), a Cambridge, MA-based nonprofit dedicated to reducing global catastrophic and existential risk from powerful technologies. The podcast launched in 2015 and had grown to over 269 episodes as of early 2026, making it one of the longest-running and most comprehensive audio resources on existential risk and AI safety.

The show has had several hosts over its history. Ariel Conn and Lucas Perry were earlier hosts and directors; Perry notably led the AI Alignment Podcast sub-series from roughly 2018 to 2020, which explored technical and governance dimensions of AI alignment with researchers such as Stuart Russell, Geoffrey Irving, Jan Leike, and Evan Hubinger at organizations including DeepMind, OpenAI, and MIRI. The current host and director is Gus Docker, who studied philosophy and computer science at the University of Copenhagen and is active in Effective Altruism Denmark.

Beyond the main feed, the podcast has produced notable limited series. Imagine a World, hosted by Guillaume Riesen, explored the winning entries of FLI's AI Worldbuilding Contest, asking what a positive 2045 with advanced AI might look like. The Not Cool Podcast covered climate change and related risks.

The podcast is part of FLI's broader three-strand mission of grantmaking for risk reduction, educational outreach, and policy advocacy with institutions including the UN, the US government, and the EU. It is available on all major podcast platforms, including Apple Podcasts, Spotify, and YouTube, and maintains a 4.8-star rating across platforms. New episodes are released biweekly and cover AI safety, superintelligence, biotechnology, nuclear risk, ethics, governance, and aspirational futures.
Theory of Change
The FLI Podcast aims to reduce existential risk by spreading awareness, building shared understanding, and shaping discourse among researchers, policymakers, and the broader public. By conducting in-depth interviews with leading thinkers on AI safety, governance, and related topics, the podcast surfaces and disseminates important ideas, builds the intellectual community around existential risk reduction, and may influence the beliefs and priorities of listeners, who include researchers, funders, and decision-makers. FLI treats educational outreach of this kind as a complement to its grantmaking and policy advocacy work.
Details
- Start Date
- -
- End Date
- -
- Expected Duration
- -
- Funding Raised to Date
- -
- Last Updated
- Apr 3, 2026, 1:18 AM UTC
- Created
- Apr 3, 2026, 1:18 AM UTC