Garrett Baker
Bio
Garrett Baker is an independent AI alignment researcher based in Berkeley, CA. He works on using singular learning theory (SLT), neuroscience, and reinforcement learning to build mathematically grounded theories for how values develop during training in ML systems. He has participated in the MATS program twice — as a MATS 3.0 scholar working on mechanistic interpretability of maze-solving agents under Alex Turner, and in the MATS 5.0/5.1 developmental interpretability stream — and has received funding via Manifund for both a MATS stipend and a full-time research salary. His research investigates epoch-wise critical periods in neural networks through an SLT lens, explores connections between ML inductive biases and neuroscience, and aims to create training stories that could produce inner-aligned AI. He is an active contributor to LessWrong and the AI Alignment Forum under the handle d0themath, with over 77 posts and 6,600 karma.
Links
- Personal Website: https://garrettebaker.github.io/
- Twitter / X
- LessWrong: d0themath
Grants
- from Long-Term Future Fund
- from Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 4:07 PM UTC
- Created
- Mar 20, 2026, 2:51 AM UTC