FAR.AI (Fund for Alignment Research) is a 501(c)(3) nonprofit research organization dedicated to ensuring advanced AI systems are safe, robust, and aligned with human values. Founded in July 2022 by Adam Gleave and Karl Berzins, the organization incubates and accelerates early-stage AI safety research agendas that are too resource-intensive for individual academics but too early-stage for industry. Their technical research spans robustness (finding vulnerabilities in superhuman AI systems), value alignment (developing sample-efficient value learning algorithms), and model evaluation (black-box and white-box evaluation methods). Beyond in-house research, FAR.AI builds the AI safety field through the Alignment Workshop series, the FAR.Labs coworking space in Berkeley, targeted grantmaking, and fellowship programs.
Funding Details
- Annual Budget: $8,602,996
- Monthly Burn Rate: $716,916
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
FAR.AI believes that AI safety is a global public good that receives substantially less investment than capability development. Their theory of change operates on multiple levels:
1. Conducting in-house technical research on robustness, alignment, and evaluation to identify vulnerabilities and develop safety techniques that could prove as transformative as RLHF was for the field.
2. Incubating early-stage research agendas that are too large for individual researchers but too early for industry, then scaling the most promising ones.
3. Building the AI safety field through workshops, fellowships, and a coworking space to grow the pipeline of safety researchers.
4. Bridging technical insights to policy through a technical governance division that works with AI Safety Institutes and policymakers.
5. Making grants to accelerate safety research at academic institutions.

By combining direct research impact with field-building and policy influence, they aim to achieve major technical breakthroughs and influence the adoption of safety standards at frontier labs by 2028.
Grants Received
- from Open Philanthropy
- from Open Philanthropy
- from Open Philanthropy
- from Survival and Flourishing Fund
- from Open Philanthropy
- from Open Philanthropy
- from Open Philanthropy
Projects
- The FAR.AI YouTube channel (@FARAIResearch) publishes recordings of AI safety talks, seminars, and workshop sessions organized by FAR.AI.
- A fiscally sponsored AI safety project led by Ethan Perez that funded research engineers to work on language-model misalignment; the project later evolved into part of FAR.AI (Frontier Alignment Research).
People
- No linked people
Key risk: Given FAR.AI's rapid expansion and broad portfolio, the main concern is diluted focus and low marginal counterfactual impact, since many evaluation and robustness agendas are now pursued inside frontier labs and AI Safety Institutes. This concern is heightened by more than $30M in recent funding commitments, which may saturate their best growth opportunities.
Details
- Last Updated: Apr 2, 2026, 10:10 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: FAR.AI combines hits-based, large-team, early-stage safety research with demonstrated technical credibility (e.g., Nature-published exploits of superhuman Go systems and red-teaming of leading LLMs) and unique convening and translation capacity (the Alignment Workshop series, FAR.Labs, and a new technical governance division). Together, these position it to turn promising alignment and evaluation ideas into standards adopted by frontier labs and policymakers.