Adam Shimi
Bio
Adam Shimi is a French AI safety researcher currently working as a Policy Researcher at ControlAI, a London-based non-profit focused on preventing the development of unsafe superintelligence. He completed his PhD in theoretical computer science at the Université de Toulouse (IRIT) in 2020, with a dissertation on distributed computing and the Heard-Of model, and holds an engineering degree in Computer Science and Applied Mathematics from ENSEEIHT (2014-2017). Before joining ControlAI, he was a co-founder and early staff member at Conjecture, an AI alignment research startup, where he also founded Refine, a three-month incubator helping conceptual alignment researchers develop new research directions. He has conducted independent AI alignment research with support from the Long-Term Future Fund and is an active contributor to the Alignment Forum and LessWrong, with over 124 posts and 875 comments. His research interests span the epistemology of alignment, agent foundations, goal-directedness, and the methodology of AI safety research, and he blogs at For Methods (formethods.substack.com).
Links
- Personal Website
- https://adamshimi.github.io/
- Twitter / X
- LessWrong
- adamshimi
Grants
from Long-Term Future Fund
from Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 1:43 PM UTC
- Created
- Mar 20, 2026, 2:46 AM UTC