David Reber
Bio
David Reber is a PhD student in Computer Science at the University of Chicago, advised by Victor Veitch and Ari Holtzman. His research centers on precise causal inference over large language models, with a focus on post-hoc internal interpretability and validating human-understandable concepts within these systems. He is motivated by AI safety applications such as monitoring long-term planning and detecting deception, and is also interested in fairness and adversarial robustness. Earlier in his PhD he worked on empirical and theoretical extensions of Cohen and Hutter's pessimistic conservative reinforcement learning agent under the guidance of Michael Cohen. He received multiple grants from the Long-Term Future Fund, beginning in 2021, supporting his early RL safety research and his transition into the AI safety field. He is an active contributor to the AI Alignment Forum and LessWrong under the handle derber, and has published at venues including ICML.
Links
- Personal Website: https://www.davidpreber.com/
- Twitter / X
- LessWrong: derber
Grants
- Long-Term Future Fund
- Long-Term Future Fund
- Long-Term Future Fund
Details
- Last Updated: Mar 22, 2026, 3:33 PM UTC
- Created: Mar 20, 2026, 2:50 AM UTC