The Existential Risk Observatory (XRO) is a foundation under Dutch law based in Amsterdam that aims to reduce human extinction risk by informing public debate. Founded in May 2021 by physicist Otto Barten, the organization focuses on raising awareness of existential risks, particularly from advanced AI, through op-eds in major media outlets, policy proposals to governments, research on effective risk communication, and public events such as the AI Safety Summit Talks series. In 2024, the Campaign for AI Safety merged with XRO, expanding its advocacy capacity. The organization's most prominent initiative is the Conditional AI Safety Treaty, a proposal published in TIME magazine outlining an international framework for pausing unsafe AI development.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
The Existential Risk Observatory believes that public awareness of existential risks is a critical and neglected lever for reducing those risks. Their theory is that communicating existential risks effectively to the general public, through major media outlets, policy channels, and public events, can expand the talent pipeline entering x-risk work, increase funding for risk reduction, stimulate the creation of new research institutes, raise the political priority of existential risk mitigation, and diversify the approaches being pursued to address these risks. They measure progress through media coverage metrics and public awareness surveys. On the policy front, they believe that concrete, well-designed governance proposals such as the Conditional AI Safety Treaty can give governments actionable frameworks for managing AI risks before capabilities outpace safety measures.
Grants Received
- Grant from the Long-Term Future Fund
- Grant from the Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Discussion
Key risk: The conditional pause/treaty strategy may be politically infeasible or polarizing, and given XRO’s small, volunteer-heavy team, the work may mostly yield publicity rather than durable policy wins, limiting counterfactual impact relative to more established governance orgs.
Details
- Last Updated: Apr 2, 2026, 10:00 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: They are a nimble, cost-effective communications and policy shop that has already secured mainstream coverage (e.g., TIME), convened top figures (e.g., Bengio and Tallinn), and is advancing a concrete Conditional AI Safety Treaty to operationalize a pause via Safety Institutes, an underexplored lever that could meaningfully reduce AI x-risk if adopted.