Matthias Dellago
Bio
Matthias Dellago is a machine learning researcher who describes himself as a former physicist now focused on the science of deep learning, learning theory, and optimization. He received a Long-Term Future Fund stipend in October 2023 to support his master's thesis and an accompanying paper on mechanistic interpretability of attention mechanisms, with plans to publish on arXiv and release a tool for other researchers. His GitHub projects include a fork of TransformerLens and a project visualizing self-attention as a vector field, both consistent with his focus on mechanistic interpretability of attention. He holds the title of Guest Researcher (likely at the University of Innsbruck, based on online sources) and is active on LessWrong and the Alignment Forum under the handle matthias-dellago.
Links
- Personal Website: https://matthiasdellago.github.io/
- Twitter / X
- LessWrong: matthias-dellago
Grants
from Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 11:25 PM UTC
- Created
- Mar 20, 2026, 2:55 AM UTC