Zach Furman
Bio
Zach Furman is a PhD student at the University of Melbourne working on singular learning theory and the mathematical foundations of deep learning. He is advised by Liam Hodgkinson and collaborates closely with Daniel Murfet and Timaeus. His research aims to make AI safer by using tools from mathematics and physics to understand how neural networks work, with a focus on developmental interpretability.

He holds an undergraduate degree in mathematics and computer science from Boston University. Before his PhD, he worked in rocket engineering (embedded software, electrical, and aerospace engineering) and briefly conducted research in machine learning interpretability and condensed matter physics. As a researcher affiliated with FAR.AI, he contributed to the paper "Eliciting Latent Predictions from Transformers with the Tuned Lens". He also co-authored "The Loss Kernel: A Geometric Probe for Deep Learning Interpretability" and a position paper on singular learning theory for AI safety.

In October 2023 he received a $40,000 grant from the Long-Term Future Fund to support six months of research in Daniel Murfet's group at the University of Melbourne, with results targeted for publication at academic ML conferences.
Links
- Personal Website
- https://zachfurman.com/
- Twitter / X
- LessWrong
- zach-furman
Grants
- $40,000 from the Long-Term Future Fund (October 2023)
Details
- Last Updated
- Mar 23, 2026, 2:02 AM UTC
- Created
- Mar 20, 2026, 3:00 AM UTC