SaferAI is a governance and research nonprofit based in Paris, France, focused on incentivizing responsible AI development through quantitative risk modeling, corporate accountability mechanisms, and technical standards. The organization independently evaluates leading AI companies' risk management practices through its public ratings system, develops quantitative models that translate AI capabilities into real-world risk assessments (with particular focus on cyber risk, CBRN threats, and loss of control), and actively contributes to AI governance standards including the EU AI Act Code of Practice, ISO/IEC and CEN-CENELEC standards, and the OECD G7 Hiroshima AI Process reporting framework.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
SaferAI operates on the theory that establishing rigorous, quantitative risk management infrastructure for AI can make regulation enforceable and incentivize responsible development practices. By publishing transparent, independent ratings of AI companies' safety practices, it generates public accountability pressure and provides actionable information to policymakers, investors, and AI users. By contributing to international technical standards and policy frameworks (EU AI Act, ISO, OECD), it helps ensure that safety requirements are concrete, measurable, and embedded in enforceable regulation. Its quantitative risk models aim to translate abstract concerns about AI capabilities into specific, measurable harm assessments, bridging the gap between AI safety research and the practical risk management that industry and regulators can act on.
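To make the "quantitative risk model" framing concrete, here is a minimal sketch of one common approach: rolling capability-driven risk scenarios up into an expected-harm figure via probability-weighted severity. The scenario names, probabilities, and dollar figures below are hypothetical illustrations, not SaferAI's actual methodology or estimates.

```python
# Hypothetical sketch of a quantitative risk roll-up: expected annual harm
# as the sum of probability-weighted severities across risk scenarios.
# All scenario names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    annual_probability: float  # P(scenario occurs in a given year)
    severity: float            # estimated harm if it occurs (e.g., USD)

def expected_annual_harm(scenarios: list[RiskScenario]) -> float:
    """Expected harm per year: sum of probability-weighted severities."""
    return sum(s.annual_probability * s.severity for s in scenarios)

# Illustrative inputs only; a real assessment would elicit these from
# capability evaluations and structured expert judgment.
scenarios = [
    RiskScenario("model-assisted cyberattack on critical infrastructure", 0.02, 5e9),
    RiskScenario("capability uplift to a CBRN threat actor", 0.005, 5e10),
]

print(f"Expected annual harm: ${expected_annual_harm(scenarios):,.0f}")
```

The value of even a toy model like this is that it forces each qualitative concern to be stated as an explicit probability and severity that regulators and companies can scrutinize and update.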
Grants Received
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Discussion
Key risk: With Simeon Campos stepping down and the organization transitioning to new leadership, there is real execution and theory-of-change risk: its ratings and standards work could end up entrenching box-ticking compliance rather than meaningfully constraining catastrophic capability development.
Details
- Last Updated: Apr 2, 2026, 10:01 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: SaferAI is uniquely positioned to operationalize AI safety into enforceable practice. Its combination of quantitative risk modeling, influential standards development, and an independent ratings platform has already attracted EU/ISO attention and investor scrutiny, giving it leverage over frontier labs' risk management.