Decode Research
Decode Research is a nonprofit AI safety research infrastructure organization focused on improving understanding of AI models and accelerating interpretability research. The organization builds and maintains key open-source tools including Neuronpedia (an interpretability platform for hosting, visualizing, and understanding sparse autoencoders), SAELens (a library for training and analyzing sparse autoencoders on language models), and circuit-tracer (a tool for finding circuits using transcoder features). By releasing open-source sparse autoencoders (SAEs) for popular models and providing free research infrastructure, Decode lowers barriers to entry for AI safety research and enables a broader community to contribute to understanding AI internals.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- Players Philanthropy Fund, Inc.
Theory of Change
Decode Research believes that mechanistic interpretability, the science of reverse-engineering neural networks to understand their computational mechanisms, is essential for AI safety. By building free, open-source research infrastructure and platforms, they lower the barriers to entry for interpretability research, enabling more researchers worldwide to contribute to understanding AI internals. As AI systems become more powerful, this understanding will be critical for aligning them with human values and for detecting potentially dangerous behaviors before deployment. Their tools allow researchers to train, visualize, and analyze sparse autoencoders, which decompose neural network activations into interpretable features, providing a path toward making AI models more transparent, understandable, editable, and safe.
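The core idea above, that a sparse autoencoder decomposes an activation vector into a sparse set of interpretable features and reconstructs it from them, can be sketched in a few lines. This is a toy illustration with hypothetical dimensions and random, untrained weights, not SAELens's actual API or a real trained SAE:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; real SAEs use the model's hidden size and a much wider dictionary.
d_model, d_sae = 16, 64

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU keeps only positively activating features, yielding a sparse code.
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(f):
    # Reconstruct the activation as a linear combination of decoder rows
    # (the "feature directions").
    return f @ W_dec + b_dec

activation = rng.normal(size=d_model)  # stand-in for a language-model activation
features = encode(activation)          # sparse feature activations
reconstruction = decode(features)      # approximate reconstruction of the input

# Training minimizes reconstruction error plus a sparsity penalty on the features
# (an L1 term in the classic setup), which pushes most features to zero.
loss = np.sum((activation - reconstruction) ** 2) + 1e-3 * np.sum(np.abs(features))
```

In a trained SAE, each of the `d_sae` features tends to fire on a recognizable concept, which is what makes the decomposition useful for interpretability work.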
Grants Received
from Survival and Flourishing Fund
Projects
Neuronpedia is an open-source interpretability platform for exploring, analyzing, and steering the internal features of AI language models. It serves as the primary public infrastructure for mechanistic interpretability research, particularly around sparse autoencoders (SAEs).
People
No linked people.
Details
- Last Updated
- Apr 2, 2026, 9:49 PM UTC
- Created
- Mar 18, 2026, 11:18 PM UTC