David Udell
Bio
David Udell is an independent AI alignment researcher and Content Manager at Iliad, an organization focused on applied mathematics research for AI alignment. He is based in Berkeley, CA.

He participated in the SERI MATS program, where he worked on shard theory with Team Shard under mentors including Alex Turner (TurnTrout), and has since pursued alignment research full-time. His research spans mechanistic interpretability, activation engineering, and alignment distillation: he co-authored work on steering language models via activation vectors, contributed to research on understanding and controlling maze-solving policy networks, and has worked on sparse circuit discovery in GPT-2-small.

He writes extensively on LessWrong and the Alignment Forum, where he authored the alignment-distillation sequence "Winding My Way Through Alignment" along with numerous posts on shard theory, interpretability, and related topics. His independent research has been supported by multiple grants from the Long-Term Future Fund. He is currently involved with the Iliad Fellowship and Iliad Intensive, programs offering mentored technical AI alignment research, and co-organized Agent Foundations 2026 at Carnegie Mellon University.
Links
- Personal Website
- Twitter / X
- LessWrong: david-udell
Grants
- Long-Term Future Fund
- Long-Term Future Fund
- Long-Term Future Fund
- Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 3:34 PM UTC
- Created
- Mar 20, 2026, 2:50 AM UTC