Convergence Analysis is a nonprofit research organization and think tank focused on designing a safe and flourishing future for humanity in a world with transformative AI. The organization builds sociotechnical reports modeling plausible and high-consequence AI scenarios, conducts research into key AI governance strategies and policy recommendations, and runs public awareness campaigns about AI risks. Operating across the US, UK, Canada, and Portugal, Convergence brings together an interdisciplinary team spanning technical AI alignment, ethics, governance, hardware research, philosophy, and mathematics.
Funding Details
- Annual Budget: $950,000
- Monthly Burn Rate: $79,167
- Current Runway: -
- Funding Goal: $880,000
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Convergence Analysis believes the most critical task is to steer the evolution of AI technology in a direction that ensures it continues to advance human productivity and well-being while reducing the likelihood of existentially risky outcomes. Their theory of change operates through three channels: (1) building rigorous sociotechnical scenario research that maps plausible AI development pathways and their consequences, providing the analytical foundation for informed governance decisions; (2) translating scenario research into specific, actionable governance recommendations that directly inform policymakers and regulatory bodies (as demonstrated by their influence on the EU AI Act and US Bureau of Industry and Security rules); and (3) raising public awareness of AI risks through accessible media including books, podcasts, and public education, thereby building broader societal support for responsible AI governance.
Grants Received
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
Projects
AI Clarity is the scenario planning research program of Convergence Analysis, exploring possible futures with transformative AI and evaluating strategies to mitigate existential risks through systematic scenario analysis.
People
No linked people.
Key risk: Counterfactual impact is uncertain. As a small, early-stage think tank spreading effort across scenario reports and public awareness work, Convergence's speculative modeling may see limited uptake among key US and EU decision-makers relative to more established governance organizations, risking high-quality output with low policy penetration.
Details
- Last Updated: Apr 2, 2026, 10:10 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: Convergence Analysis is uniquely focused on rigorous sociotechnical scenario planning (AI Clarity) that feeds directly into actionable governance proposals. With early but tangible policy traction (BIS proposed rules, the EU AI Act GPAI Code, the Threshold 2030 convening), marginal funding is a high-leverage way to convert short-timeline foresight into regulation.