ML Alignment & Theory Scholars (MATS)
MATS (ML Alignment & Theory Scholars) is an independent research and educational program that connects talented researchers with top mentors in AI alignment, interpretability, governance, and security. Each cohort brings together approximately 100-120 fellows for an intensive 12-week research program in Berkeley, California, or London, UK, with an optional 6-12 month funded extension. Fellows receive stipends, compute budgets, housing, meals, and dedicated research management support. Since late 2021, MATS has trained 446+ researchers, producing 170+ publications with 9,500+ collective citations; approximately 80% of alumni now work directly in AI safety.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
MATS operates on the premise that AI alignment research is pre-paradigmatic, with diverse potentially promising research agendas. The program's theory of change centers on identifying exceptionally talented individuals, pairing them with established alignment researchers as mentors, and accelerating their development into independent researchers capable of pursuing original agendas. By supporting many different alignment research agendas simultaneously, MATS aims to decorrelate failure across approaches. The pipeline from fellowship to extension to full-time positions creates a sustained talent flow into AI safety organizations and labs. At scale, MATS functions as the primary feeder program for the AI safety research ecosystem, with 80% of alumni going on to work in the field and 10% co-founding new safety organizations.
Grants Received
- from Open Philanthropy
- from Open Philanthropy
- from Survival and Flourishing Fund
- from Open Philanthropy
- from Open Philanthropy
- from Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Discussion
Key risk: Because the program's marginal impact hinges on scarce mentor attention and agenda quality, rapid expansion may dilute research rigor or funnel trainees into lab-driven approaches of disputed alignment value. Moreover, given heavy Open Philanthropy support, the counterfactual value of additional funding could be limited.
Details
- Last Updated: Apr 2, 2026, 10:10 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: MATS pairs large cohorts of exceptional fellows with leading alignment mentors and robust support (stipends, compute, weekly research management). It has a demonstrated track record of converting trainees into productive researchers across diverse agendas (80% of alumni working in AI safety, strong placements, new organizations founded), so funding MATS scales the field's highest-leverage talent pipeline.