Tilman Räuker
Bio
Tilman Räuker is Co-Director of Pivotal Research, a fellowship program supporting researchers working on global catastrophic risk reduction, with a focus on technical AI safety and AI governance. He holds a Master's degree from Leibniz University Hannover, where his thesis focused on temporally extended reinforcement learning in dynamic algorithm configuration. His research centers on mechanistic interpretability and understanding the internal representations of deep neural networks, including work on transformer world models. He co-authored the widely cited survey "Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks" (SaTML 2023), as well as papers on structured world representations and causal world models in maze-solving transformers, published at NeurIPS and ICLR. He previously served as a Technical AI Safety Research Manager at the ERA Fellowship and led requests for proposals on Cybersecurity AI and AI Agent Evaluation at the AI Safety Fund. He also participated in a FAR Labs residency researching goal-directedness in transformer models.
Links
- Personal Website: https://www.raeuker.com/
- Twitter / X
- LessWrong: tilmanr
Grants
- Grant from the Long-Term Future Fund
Details
- Last Updated
- Mar 23, 2026, 1:40 AM UTC
- Created
- Mar 20, 2026, 2:59 AM UTC