Transluce develops AI-driven tools for auditing and understanding AI systems, with the goal of enabling democratic oversight of AI at scale. Co-founded in October 2024 by Jacob Steinhardt (UC Berkeley) and Sarah Schwettmann (MIT CSAIL), the lab operates as a 501(c)(3) nonprofit and releases its core oversight infrastructure open-source. Its approach uses AI agents to automatically analyze large language models—generating neuron descriptions, building observability interfaces, and eliciting behaviors—making opaque systems comprehensible to researchers, governments, and civil society.
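To make the approach concrete, here is a minimal sketch of the neuron-description step, assuming a generic chat-completion helper. The names, prompt wording, and example data below are illustrative assumptions, not Transluce's actual pipeline.

```python
# Minimal sketch of the automated neuron-description idea: show an
# "explainer" LLM the text snippets that most strongly activate a neuron
# and ask it to summarize the concept. NeuronRecord, query_llm, and the
# prompt wording are hypothetical, not Transluce's code.

from dataclasses import dataclass


@dataclass
class NeuronRecord:
    layer: int
    index: int
    top_examples: list[str]  # snippets that most strongly activate the neuron


def query_llm(prompt: str) -> str:
    """Stand-in for any chat-completion API call (hypothetical)."""
    return "(one-sentence description returned by the explainer model)"


def describe_neuron(record: NeuronRecord) -> str:
    """Build a prompt from a neuron's top-activating snippets and ask an
    explainer model for a one-sentence description of what it detects."""
    examples = "\n".join(f"- {snippet}" for snippet in record.top_examples)
    prompt = (
        f"These snippets strongly activate neuron {record.index} "
        f"in layer {record.layer} of a language model:\n{examples}\n"
        "In one sentence, what concept does this neuron respond to?"
    )
    return query_llm(prompt)


if __name__ == "__main__":
    neuron = NeuronRecord(
        layer=12,
        index=3047,
        top_examples=["the Golden Gate Bridge", "a suspension bridge at dusk"],
    )
    print(describe_neuron(neuron))
```

The same agentic pattern plausibly extends to behavior elicitation: an investigator model proposes inputs, observes the target model's responses, and refines its hypotheses, though the specifics of Transluce's tooling are not shown here.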
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: $11,000,000
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Transluce believes that scalable democratic oversight of AI requires automated tools that can match the pace and complexity of modern AI development. Its causal chain runs as follows:
1. Build open-source, AI-driven tools that automatically analyze and explain the internals and behaviors of large AI models.
2. Put these tools in the hands of independent evaluators, governments, and civil society, so that safety assessments are no longer controlled solely by commercial labs.
3. Establish shared industry standards, through bodies like the AI Evaluator Forum, that normalize independent auditing.
4. Use public audits and transparency to create accountability pressure that pushes AI developers toward safer deployment practices.
By operating as a nonprofit that openly publishes its methods, Transluce aims to become a trusted, independent reference point that can credibly identify risks such as deception, hallucination, and misuse before they cause harm.
Grants Received: no grants recorded
Projects: no linked projects
People: no linked people
Discussion
Key risk: Transluce's impact is bottlenecked by access and adoption. If labs withhold robust evaluator access, or if its automated interpretability and evaluation methods fail to generalize to deceptive frontier systems, the ambitious, compute-intensive scale-up could have low counterfactual value while still risking capability spillovers.
Details
- Last Updated: Apr 2, 2026, 10:10 PM UTC
- Created: Mar 19, 2026, 10:30 PM UTC
Case for funding: Transluce, led by Jacob Steinhardt and Sarah Schwettmann, is building open, automated oversight tools and evaluator standards (AEF-1) that enable independent audits of frontier models at scale. This is a high-leverage way to shift industry norms and to empower governments and civil society to hold AI deployment accountable.