CeSIA (Centre pour la Sécurité de l'IA) is an independent French non-profit dedicated to preventing major risks from AI through research, education, and advocacy. The organization offers Europe's first accredited university-level AI safety courses at ENS Ulm and Paris-Saclay University, develops open-source evaluation tools like the BELLS benchmark for testing AI monitoring systems, and publishes the AI Safety Atlas textbook. CeSIA also engages in institutional advocacy, contributing to EU AI Act implementation and collaborating with bodies like the OECD and UNESCO. It runs the international ML4Good bootcamp program and led the Global Call for AI Red Lines, which was presented at the UN General Assembly in September 2025.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- EffiSciences
Theory of Change
CeSIA believes that reducing catastrophic AI risks requires building a robust culture of AI safety in France and Europe, regions that play a decisive role in global AI governance. Their theory of change operates through three channels: (1) training the next generation of AI safety researchers and engineers through university courses, bootcamps, and open educational resources, thereby growing the field; (2) developing technical evaluation tools like the BELLS benchmark that enable third-party assessment of AI safeguards, creating accountability mechanisms for AI developers; and (3) translating technical AI safety concerns into actionable policy recommendations for European and international institutions, helping establish regulatory frameworks and red lines that prevent the most dangerous AI capabilities from being deployed without adequate safeguards.
Grants Received
from Survival and Flourishing Fund
Discussion
Key risk: As a small, early-stage organization stretched across education, technical benchmarking, and international advocacy, CeSIA's theory of change depends on two uncertain outcomes: meaningful enforcement of the EU AI Act and broad adoption of AI "red lines." If either fails to materialize, its execution capacity and counterfactual impact may be limited.
Details
- Last Updated
- Apr 2, 2026, 10:09 PM UTC
- Created
- Mar 18, 2026, 11:18 PM UTC
Case for funding: CeSIA sits at the nexus of technical evaluation and EU policy, with demonstrated traction (its recommendations were adopted verbatim into the EU AI Act's Code of Practice) and unique European talent-pipeline assets (the only accredited university AI safety courses and the AI Safety Atlas). This positions it to make EU enforcement and third-party evaluations like BELLS actually bite.