From AI to ZI is a personal blog and newsletter run by Robert Huben, a PhD mathematician and AI safety researcher. Launched in October 2022 and initially supported by a one-year Open Philanthropy grant, the blog documents Huben's self-directed study program in AI safety. Posts cover mechanistic interpretability, AI capabilities analysis, existential risk from unaligned AI, and related mathematical perspectives. Each post is rated on a scale from "AI" to "ZI" based on its relevance to AI safety. During his grant period, Huben also co-authored two research papers, on sparse autoencoders and attention-only transformers, which were presented at NeurIPS 2023 workshops. The blog has been semi-dormant since the grant ended in September 2023 but continues to publish occasional posts.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Huben's implicit theory of change is that increasing the number of technically skilled researchers working on AI safety — particularly from adjacent fields like mathematics — is important for reducing existential risk from advanced AI. He pursues this in two ways: by learning about and contributing to AI safety research (especially mechanistic interpretability), and by writing accessibly about AI safety topics to grow awareness and community knowledge, acting as both researcher and educator. The blog also serves as a demonstration that people from non-CS backgrounds can make meaningful contributions to AI safety.
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 10:09 PM UTC
- Created: Mar 19, 2026, 10:31 PM UTC
