The AI Governance and Safety Institute (AIGSI) is a nonpartisan 501(c)(3) nonprofit organization on a mission to ensure artificial intelligence and other technologies benefit humanity and are developed safely, securely, and in alignment with human values. AIGSI develops educational materials and targeted advertising campaigns to communicate core AI safety ideas to specific demographics, conducts research on alignment and interpretability, and engages with policymakers and the general public. The organization is led by Executive Director Mikhail Samin, a London-based effective altruist who also runs the related AI Safety and Governance Fund (a 501(c)(4) entity). AIGSI is at an early stage, with all current funding directed toward communications rather than staff salaries.
Funding Details
- Annual Budget: not reported
- Monthly Burn Rate: not reported
- Current Runway: not reported
- Funding Goal: not reported
- Funding Raised to Date: not reported
- Fiscal Sponsor: not reported
Theory of Change
AIGSI believes the primary bottleneck to reducing existential risk from AI is insufficient awareness and concern among institutions and policymakers. By running targeted advertising campaigns and producing clear educational materials, AIGSI aims to shift public and institutional understanding of why advanced AI poses an extinction risk. Improved understanding is expected to build political will for regulatory intervention, specifically for preventing any actor from developing superhuman AI before alignment is solved. The causal chain: better communication of technical AI safety arguments → informed stakeholders and policymakers → stronger institutional responses (regulation, international agreements) → reduced probability of catastrophic AI outcomes.
Grants Received
No grants recorded.
Projects
An interactive public education tool, operated jointly by the AI Governance and Safety Institute (AIGSI) and the AI Safety and Governance Fund (AISGF), that uses a personalized AI chatbot to explain AI existential risk to general audiences.
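The personalization pattern behind such a tool is simple: collect a short description of the visitor's background, then ask a language model to tailor its explanation accordingly. Below is a minimal, hypothetical sketch of that loop in Python; the system prompt, model name, and OpenAI client wiring are illustrative assumptions, not AIGSI's actual implementation.

```python
# Hypothetical sketch of a personalized AI-safety chatbot loop.
# The prompt text and model name below are assumptions for illustration;
# nothing here is taken from AIGSI's or AISGF's actual tool.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You explain the arguments for why advanced AI could pose an "
    "extinction risk. Adapt your depth, vocabulary, and examples to "
    "the reader's stated background."
)

def answer(question: str, background: str) -> str:
    """Return an explanation of an AI-risk question tailored to the reader."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"My background: {background}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("Why would an AI system pursue goals we didn't give it?",
                 "high-school student"))
```

The key design choice in this sketch is passing the reader's self-described background into every request, so the same underlying arguments can be re-explained at different levels of technical depth for different demographics.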
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 10:10 PM UTC
- Created: Mar 19, 2026, 10:31 PM UTC