Johannes Heidecke
Bio
Johannes Heidecke is the Head of Safety Systems at OpenAI, where he leads work on model safety behavior and alignment. He completed a Master's degree in Artificial Intelligence in Barcelona and participated in the MATS Summer 2022 Cohort under the mentorship of Evan Hubinger. Early in his career he organized the second AI Safety Camp in Prague, a retreat for nearly 30 aspiring AI alignment researchers, and received funding to support this field-building work. At OpenAI he has co-authored influential safety research, including "Deliberative Alignment: Reasoning Enables Safer Language Models" (2024) and "Improving Model Safety Behavior with Rule-Based Rewards," both of which have shaped how OpenAI's o-series models handle safety-critical decisions. He spoke at the 2025 Singapore Conference on AI and has been quoted on OpenAI's preparedness framework and on the risks posed by advanced reasoning models.
Links
- Personal Website: https://johannesheidecke.github.io/
- Twitter / X
- LessWrong: johannes-heidecke
Grants
- Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 10:17 PM UTC
- Created
- Mar 20, 2026, 2:52 AM UTC