Orthogonal is a non-profit alignment research organization founded in 2023 by Tamsin Leake and based in Europe. The organization pursues the formal alignment flavor of agent foundations, aiming to solve alignment in a manner that scales to superintelligence. Their primary research agenda centers on QACI (Question-Answer Counterfactual Interval), a framework for building fully formalized mathematical goals that lead to good outcomes when pursued by an AI system. Orthogonal's approach emphasizes designing aligned AI from scratch rather than retrofitting alignment onto existing systems, and they share MIRI's concern that most current alignment approaches may be insufficient for the challenge posed by advanced AI.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: Ashgro Inc
Theory of Change
Orthogonal believes that most current AI alignment approaches are insufficient to handle superintelligence, and that alignment must be built into AI systems from the ground up using rigorous mathematical formalization. Their theory of change uses backchaining: starting with a plausible scenario in which existential risk from AI is averted, then working backward to identify what research is needed. The causal chain runs from developing QACI (a fully formal mathematical goal specification that leads to good outcomes when pursued) to designing AI architectures that provably pursue such formal goals, to deploying systems that are aligned by construction rather than by post-hoc correction. They position their work as the object-level research that other strategies (like cyborgism or buying-time approaches) would want to accelerate.
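To make the "formal goal, pursued by construction" idea concrete, the sketch below shows the general shape in Python: an agent that selects actions by maximizing a mathematically specified utility over a toy world model. This is a minimal illustration of formal goal specification in general, not Orthogonal's actual QACI construction (which defines the goal via counterfactual question-answer queries against a formal model of the world); all names here (`WorldState`, `best_action`, the toy utility) are hypothetical.

```python
# Purely illustrative: a generic formally-specified-goal agent.
# Not Orthogonal's QACI; all names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass(frozen=True)
class WorldState:
    """A fully specified state of a toy world model."""
    facts: frozenset

# A "formal goal": a mathematically defined utility over world states.
Utility = Callable[[WorldState], float]

def best_action(
    actions: Iterable[str],
    transition: Callable[[str], WorldState],
    utility: Utility,
) -> str:
    """Return the action whose resulting state scores highest under the goal.

    In an aligned-by-construction design, one would prove properties of
    this loop (e.g., that it only ever optimizes `utility`) up front,
    rather than correcting deployed behavior after the fact.
    """
    return max(actions, key=lambda a: utility(transition(a)))

# Toy usage: the goal is fixed, in fully formal terms, before the agent runs.
outcomes = {
    "act_safe": WorldState(frozenset({"flourishing"})),
    "act_risky": WorldState(frozenset({"paperclips"})),
}
goal: Utility = lambda s: 1.0 if "flourishing" in s.facts else 0.0
assert best_action(outcomes, outcomes.__getitem__, goal) == "act_safe"
```

The design point the sketch gestures at: on this approach, the alignment work lives in the definition of the formal goal before deployment, not in patching the agent's behavior afterward.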
Grants Received
- Survival and Flourishing Fund
Details
- Last Updated: Apr 2, 2026, 10:11 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: As one of the few groups within agent foundations seriously pursuing an aligned-by-construction path via a fully formal goal specification (QACI), Orthogonal is a neglected, high-upside bet on hard-takeoff scenarios in which only mathematically grounded guarantees are likely to suffice.