Gideon Futerman
Bio
Gideon Futerman is a researcher focused on AI safety and existential risk, currently working as a Special Projects Associate at the Center for AI Safety (CAIS) and as a MATS (ML Alignment Theory Scholars) scholar on gradual disempowerment research. He studied Earth Sciences at St Edmund Hall, University of Oxford, where his academic interests first led him into existential risk research through the lens of solar radiation modification (SRM) and climate change. He has been affiliated with the Centre for the Study of Existential Risk (CSER) at Cambridge, contributing to work on how SRM interacts with global catastrophic risk scenarios. His AI safety research spans governance and policy approaches to advanced AI, including analysis of pathways to slowing AI development, international coordination to avoid an artificial superintelligence race, and the systemic risks of gradual human disempowerment as AI capabilities increase. He writes on these topics at his Substack, has published in AI Frontiers and The Oxford Scientist, and co-authored an SSRN paper on escalation pathways in the age of solar geoengineering.
Links
- Personal Website: https://futerman.substack.com/
- Twitter / X
- LessWrong: gideon-futerman
Grants
Grant from the Long-Term Future Fund
Details
- Last Updated: Mar 22, 2026, 4:08 PM UTC
- Created: Mar 20, 2026, 2:51 AM UTC