The Machine Intelligence Research Institute (MIRI) is a 501(c)(3) nonprofit based in Berkeley, California, founded in 2000 by Eliezer Yudkowsky. MIRI's technical and philosophical work helped found the field of AI alignment, and its researchers originated many of the theories and concepts central to today's discussions of AI safety. Following a major strategic pivot in 2024, MIRI shifted from prioritizing technical alignment research to focusing on policy and communications, arguing that alignment research alone is unlikely to succeed in time. MIRI now pursues three objectives: securing international agreements to halt progress toward artificial superintelligence, sharing its risk models with policymakers and the public, and continuing reduced-scale technical governance research.
Funding Details
- Annual Budget: $7,100,000
- Monthly Burn Rate: $591,667 (see note below)
- Current Runway: -
- Funding Goal: $6,000,000
- Funding Raised to Date: $48,000,000
- Fiscal Sponsor: -
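Note: the listed monthly burn rate appears to be the annual budget spread evenly across twelve months: $7,100,000 / 12 ≈ $591,667.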
Theory of Change
MIRI believes that the default outcome of building artificial superintelligence is human extinction, and that technical alignment research alone is unlikely to succeed in time to prevent this. Their current theory of change centers on policy intervention: by communicating the extreme risks of ASI to policymakers, the public, and AI developers, MIRI aims to build support for a globally coordinated and collectively enforced moratorium on the development of ASI. In parallel, their technical governance research explores verification mechanisms for international agreements, limitations of current AI evaluation methods, and frameworks for AI governance that could reduce catastrophic risk. In the long term, MIRI hopes to see AI-empowered projects used to avert major AI mishaps while humanity develops the scientific and institutional maturity needed to make lasting decisions about the far future.
Grants Received
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
- from Open Philanthropy
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Key risk: MIRI's theory of change hinges on an internationally coordinated halt to ASI development that appears politically implausible in the near term. This creates high execution risk and uncertain counterfactual impact, and the organization's maximalist stance may alienate mainstream actors relative to more incremental governance or technical alignment approaches.
Details
- Last Updated: Apr 2, 2026, 10:10 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: MIRI combines decades of credibility in shaping alignment discourse with a singular focus on advocating a globally enforced ASI moratorium and on developing verification and enforcement mechanisms for such an agreement. If you share their "doom by default" model, they are uniquely positioned to shift the Overton window and to supply the technical scaffolding for a hard pause.