Intelligence Rising is an educational training workshop that uses a strategic roleplay game to immerse participants in realistic AI futures scenarios. Participants take on the roles of government officials, AI advisors, and technology company executives, navigating a simulated decade of AI development to explore coordination challenges, risk dynamics, and policy decision points. Originally developed by Dr. Shahar Avin at Cambridge's Centre for the Study of Existential Risk and inspired by wargaming practices, it has been run for audiences in government, AI labs, industry, academia, NGOs, and think tanks across the US, UK, and EU.
Funding Details
- Annual Budget: $306,497
- Monthly Burn Rate: $25,508
- Current Runway: not listed
- Funding Goal: not listed
- Funding Raised to Date: not listed
- Fiscal Sponsor: not listed
Theory of Change
Intelligence Rising operates on the theory that realistic experiential simulation is an effective way to build genuine understanding of AI risk dynamics among key decision-makers. By placing participants in roles as government officials and AI company executives navigating competitive AI development over a simulated decade, the workshop creates a felt sense of the coordination failures, race dynamics, and critical decision junctures that could lead to unsafe outcomes. Participants who have experienced these dynamics firsthand are expected to be better equipped to advocate for safety-conscious policies, support international coordination mechanisms, and make more informed decisions in their real-world roles. The causal chain runs from simulation experience to improved mental models of AI risk, to changed behavior and advocacy by influential actors in government, industry, and civil society.
Grants Received: no grants recorded
Projects
A UK charity that develops and deploys Intelligence Rising, a strategic role-playing game designed to help decision-makers understand the risks, tensions, and governance challenges of AI development through facilitated wargaming exercises.
People: no linked people
Key risk: Evidence that the workshop produces lasting behavior change among truly high-leverage actors is weak and difficult to measure. This concern is compounded by a small, facilitation-dependent team and fee-based demand, both of which limit throughput and make the counterfactual value of philanthropic funding uncertain.
Details
- Last Updated: Apr 2, 2026, 9:59 PM UTC
- Created: Mar 19, 2026, 10:31 PM UTC
Case for funding: Intelligence Rising can scale a realistic, multi-stakeholder simulation of AI race and coordination dynamics that is already used by governments, labs, and think tanks, with lessons distilled in the organization's 2024 insights paper. The experiential format upgrades decision-makers' mental models in ways written reports cannot, increasing the odds of safety-conscious policy and cross-actor coordination.