Transluce develops AI-driven tools for auditing and understanding AI systems, with the goal of enabling democratic oversight of AI at scale. Co-founded in October 2024 by Jacob Steinhardt (UC Berkeley) and Sarah Schwettmann (MIT CSAIL), the lab operates as a 501(c)(3) nonprofit and releases its core oversight infrastructure open-source. Its approach uses AI agents to automatically analyze large language models—generating neuron descriptions, building observability interfaces, and eliciting behaviors—making opaque systems comprehensible to researchers, governments, and civil society.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: $11,000,000
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Transluce believes that scalable democratic oversight of AI requires automated tools that can match the pace and complexity of modern AI development. Its causal chain is: (1) build open-source, AI-driven tools that automatically analyze and explain the internals and behaviors of large AI models; (2) put these tools in the hands of independent evaluators, governments, and civil society, so that safety assessments are no longer controlled solely by commercial labs; (3) establish shared industry standards, through bodies such as the AI Evaluator Forum, that normalize independent auditing; (4) use public audits and transparency to create accountability pressure that nudges AI developers toward safer deployment practices. By operating as a nonprofit that openly publishes its methods, Transluce aims to become a trusted, independent reference point that can credibly identify risks such as deception, hallucination, and misuse before they cause harm.
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 10:10 PM UTC
- Created: Mar 19, 2026, 10:30 PM UTC
