Alex Altair
Bio
Alex Altair (also known as Alex Powell Altair) is an independent AI alignment researcher based in Berkeley, California, specializing in agent foundations. He leads Dovetail Research, a group whose mission is to help humanity safely navigate the creation of powerful AI systems through foundational mathematics research. He was previously a MIRI fellow, a MATS scholar, and an AI Safety Camp research lead; a two-time college dropout, he attended Worcester Polytechnic Institute and the University of Maine. His research focuses on the agent structure problem, optimization frameworks, Solomonoff induction, and abstract entropy as they relate to the nature of agency and its implications for AI alignment. He has worked independently on agent foundations since early 2022, with funding from the Long-Term Future Fund (LTFF), and is an active contributor to LessWrong and the AI Alignment Forum, where he has published over 70 posts.
Links
- Personal Website: https://www.alexaltair.com/
- Twitter / X
- LessWrong: Alex_Altair
Grants
- Grant from Long-Term Future Fund
- Grant from Long-Term Future Fund
Details
- Last Updated: Mar 22, 2026, 1:54 PM UTC
- Created: Mar 20, 2026, 2:46 AM UTC