CivAI is a 501(c)(3) nonprofit that builds public understanding of AI capabilities and risks through concrete, interactive software demonstrations rather than academic research. Founded in 2023 by Lucas Hansen and Siddharth Hiregowdara, CivAI produces live demos covering cybersecurity threats, deepfakes, election security, misinformation, elder fraud, and biological risks, and presents these to targeted audiences across government, civil society, and media. The organization has delivered over 100 briefings to audiences including NIST staff, state lawmakers, law enforcement, and groups such as AARP, and is a founding member of the NIST AI Safety Institute Consortium.
Funding Details
- Annual Budget: $976,565
- Monthly Burn Rate: $33,913
- Current Runway: not listed
- Funding Goal: not listed
- Funding Raised to Date: not listed
- Fiscal Sponsor: not listed
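The Current Runway field is blank because runway depends on cash on hand, which this page does not list. A minimal sketch of the standard calculation, using the listed monthly burn and a purely hypothetical cash figure:

```python
# Runway = cash on hand / monthly burn rate.
# MONTHLY_BURN comes from the Funding Details above; cash_on_hand is a
# placeholder assumption, not a figure reported by CivAI.

MONTHLY_BURN = 33_913        # "Monthly Burn Rate" from Funding Details
cash_on_hand = 500_000       # hypothetical; not provided on this page

runway_months = cash_on_hand / MONTHLY_BURN
print(f"Runway: {runway_months:.1f} months")  # ~14.7 months for this placeholder
```

At the listed burn rate, roughly $407k of reserves (12 × $33,913) corresponds to about a year of runway.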
Theory of Change
CivAI believes that policymakers, government officials, and the public lack a visceral understanding of what AI systems can actually do, and that this gap leads to inadequate regulation and preparedness. By building interactive software demonstrations that concretely show AI capabilities and risks (from deepfakes to bioweapons instructions to sophisticated phishing) and presenting them directly to decision-makers, CivAI aims to provide the foundational education needed for better-informed policy decisions. The causal chain runs from hands-on demos, to improved understanding among key stakeholders, to more effective governance and accountability for AI developers, ultimately reducing the risk of harm from misused AI capabilities.
Grants Received
- From Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Discussion
Key risk: By centering briefings on near-term misuse (phishing, deepfakes, step-by-step bio instructions), CivAI may redirect scarce policy attention toward downstream content threats over upstream frontier governance, limiting counterfactual impact on existential risk.

Case for funding: CivAI has unusual access to U.S. policymakers and the AISIC, and its high-fidelity interactive demos (including the widely covered bioweapons prompt tests) uniquely translate abstract frontier-model dangers into visceral understanding that catalyzes demand for evaluations, standards, and capability controls.
Details
- Last Updated: Apr 2, 2026, 9:49 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC