Dillon Bowen
Bio
Dillon Bowen is a researcher and engineer working on AI safety, currently a Member of Technical Staff on the Safety Systems team at OpenAI. He holds a PhD in Decision Processes from the Wharton School of the University of Pennsylvania, where he studied statistics, experiment design, and forecasting under Philip Tetlock; a Graduate Diploma and MPhil in Economics from the University of Cambridge; and a BA in Cognitive Science and Philosophy from Tufts University. Prior to OpenAI, he was a Research Scientist at FAR.AI focused on catastrophic risks from frontier models, and before that a principal data scientist at a London-based startup. He also conducted AI safety research through the ML Alignment and Theory Scholars (MATS) program and at UC Berkeley's Center for Human-Compatible AI (CHAI). His AI safety work includes co-developing the StrongREJECT jailbreak evaluation benchmark, research on data poisoning scaling laws, safety gap analysis for open-weight models, and decoding-time alignment for large language models. He received a Long-Term Future Fund grant to support his transition into an AI safety career.
Links
- Personal Website
- https://dsbowen.github.io/
Grants
- Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 3:45 PM UTC
- Created
- Mar 20, 2026, 2:50 AM UTC