Carnegie Mellon University
Carnegie Mellon University (CMU) is a private research university founded in 1900 and consistently ranked #1 in artificial intelligence by U.S. News & World Report. CMU's AI safety and governance work spans its School of Computer Science, the Software Engineering Institute (SEI), the K&L Gates Endowment for Ethics and Computational Technologies, the Block Center for Technology and Society, and the student-led Carnegie Mellon AI Safety Initiative (CASI). Key faculty include Zico Kolter, who heads the Machine Learning Department, chairs OpenAI's board-level Safety and Security Committee, and works on adversarial robustness, machine unlearning, and AI evaluation. Through the SEI, a federally funded research and development center sponsored by the Department of Defense, CMU established the first AI Security Incident Response Team (AISIRT).
Funding Details
- Annual Budget: $1,800,000,000
- Monthly Burn Rate: $150,000,000 (the annual budget spread evenly over 12 months; see the sketch below)
- Current Runway: not listed
- Funding Goal: not listed
- Funding Raised to Date: not listed
- Fiscal Sponsor: none listed
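The burn rate shown is consistent with simply dividing the annual budget across twelve months, and the runway field is blank because funding raised to date is not listed. A minimal sketch of that arithmetic, with hypothetical names (`monthly_burn`, `runway_months`) standing in for the page's fields:

```python
# Minimal sketch of the funding arithmetic above (field names are hypothetical,
# not taken from this page's data model).

ANNUAL_BUDGET = 1_800_000_000  # USD, from Funding Details above


def monthly_burn(annual_budget: float) -> float:
    # Assumes spend is spread evenly across the year.
    return annual_budget / 12


def runway_months(funding_raised: float | None, burn: float) -> float | None:
    # Runway is only computable when funding raised to date is known.
    if funding_raised is None or burn <= 0:
        return None
    return funding_raised / burn


burn = monthly_burn(ANNUAL_BUDGET)
print(f"Monthly burn: ${burn:,.0f}")  # -> Monthly burn: $150,000,000
print(runway_months(None, burn))      # -> None (funding raised is not listed)
```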
Theory of Change
CMU advances AI safety through a multi-pronged institutional strategy: producing technical research on robustness, alignment, and evaluation methods that informs the field broadly; training the next generation of AI safety researchers through degree programs, fellowships, and CASI's educational pipeline; shaping AI policy and governance frameworks by placing researchers in federal and industry advisory roles (NIST's AI Safety Institute Consortium, the DoD, OpenAI's Safety and Security Committee); and applying safety research to high-consequence national security domains through the SEI. The underlying theory is that a research university with top-ranked AI programs can simultaneously advance technical safety methods, shape industry and government norms, and seed the broader AI safety talent pipeline at scale.
Grants Received
- Six grants from Open Philanthropy (amounts and dates not listed)
Projects: no linked projects
People: no linked people
Discussion
Key risk: CMU is a massive, well-funded university with mixed incentives, so marginal donations (likely routed via Conitzer’s program) face high fungibility and may end up supporting robustness/governance work or speculative multi-agent coordination rather than core alignment, yielding limited counterfactual x-risk reduction.
Details
- Last Updated
- Apr 2, 2026, 10:01 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC
Case for funding: CMU’s integrated AI safety/security ecosystem—SEI’s AISIRT and DoD ties, CyLab’s ML security work, the NIST-funded AI Measurement Center, AISIC participation, and leadership like Zico Kolter—uniquely positions it to translate rigorous robustness/evaluation research into widely adopted standards while scaling a high-quality safety talent pipeline.