The Secure AI Project is a 501(c)(4) nonprofit founded in December 2024 by Nick Beckstead and Thomas Woodside, headquartered in San Francisco. The organization advocates for legal requirements that major AI developers publish safety and security protocols, for whistleblower protections for those revealing unsafe practices, and for clear incentives to mitigate risk in accordance with industry best practices. Its work centers on legislative advocacy across multiple states, with a focus on transparency requirements for frontier AI developers.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
The Secure AI Project believes that the largest AI developers need legally binding transparency and safety requirements to adequately manage the severe risks posed by frontier AI systems. By advocating for state and federal legislation mandating safety protocol disclosure, whistleblower protections, and adherence to industry best practices, they aim to create a regulatory environment where the biggest AI companies are held publicly accountable for their safety practices. Their theory is that transparency requirements will drive better safety behavior among frontier AI developers, that whistleblower protections will surface safety concerns that might otherwise be suppressed, and that these combined forces will reduce the probability of catastrophic harm from advanced AI.
Grants Received
- Grant from the Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Discussion
Key risk: By prioritizing transparency mandates and whistleblower protections for the largest labs, their wins may mainly produce procedural compliance and lock in industry-defined "best practices," risking weak substantive constraints and potential federal preemption. This would limit counterfactual x-risk reduction even if many bills pass.
Details
- Last Updated: Apr 2, 2026, 9:49 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: Secure AI Project is a lean 501(c)(4) run by seasoned EA policy operators who have already helped pass the first U.S. AI safety statute (CA SB 53) and the NY RAISE Act. It is uniquely positioned to scale bipartisan, binding transparency and whistleblower protections targeted at frontier labs, creating near-term legal pressure and legislative templates that can ratchet up safety norms and inform stronger federal rules.