The Eisenstat Research Program is the independent AI alignment research effort of Sam Eisenstat, a mathematician and research fellow affiliated with the Machine Intelligence Research Institute (MIRI). The program focuses on foundational questions about reasoning and agency, approached through rigorous mathematical frameworks. Key research areas include logical uncertainty and Bayesian reasoning, condensation theory (a mathematical framework for understanding concept formation), factored space models for causal inference across abstraction levels, cooperative oracles, and decision theory. The program is fiscally sponsored through both MIRI and Ashgro, Inc., and is funded by the Survival and Flourishing Fund.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $736,000
- Fiscal Sponsor: Machine Intelligence Research Institute & Ashgro, Inc.
Theory of Change
By developing rigorous mathematical foundations for understanding reasoning, agency, and concept formation, Eisenstat's work aims to provide the theoretical tools needed to build AI systems that are safe and aligned with human values. His condensation theory addresses how agents form and share concepts, which is relevant to ensuring AI systems develop understandable and compatible conceptual frameworks. His work on logical uncertainty and decision theory contributes to solving fundamental problems in how AI agents should reason and make decisions under uncertainty. The factored space models research addresses how to reason about causality across different levels of abstraction, which is important for understanding and controlling the behavior of complex AI systems.
Grants Received
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Discussion
Key risk: The main concern is that this MIRI-style foundational agenda (condensation theory, cooperative oracles, factored space models) may not translate into practical leverage on current frontier ML systems. Uptake and counterfactual impact could therefore be low, especially given the throughput limits of a one-person program.
Details
- Last Updated: Mar 19, 2026, 6:22 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: Sam Eisenstat has a rare track record of producing novel, rigorous agent-foundations results (the untrollable prior, logical inductor tiling). He is now pushing condensation theory and factored space models toward principled concept formation and multi-level causality, a high-leverage theoretical route to interpretable, alignable agents that few researchers are positioned to advance.