The AI Whistleblower Initiative (AIWI)
OAISIS, now operating as The AI Whistleblower Initiative (AIWI), is an independent nonprofit organization dedicated to supporting individuals working at the frontier of AI who wish to flag safety risks and concerning behavior. The organization provides a suite of services including Third Opinion (an anonymous expert consultation platform), pro bono legal support, AI Whistleblower Defense Fund grants, privacy and operational security guidance, and policy advocacy. AIWI is the only organization globally focused solely on systematically breaking down barriers for AI insiders through direct support, expert networks, advocacy, education, research, and policy work.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: Whistleblower Netzwerk e.V.
Theory of Change
AIWI believes that for AI to be developed safely, information about risks and concerning behavior must be able to flow from insiders to those who can act on it. When internal reporting channels fail, whistleblowers serve as a critical safety mechanism. By systematically reducing barriers to whistleblowing through anonymous expert consultation, legal protection, financial support, and operational security, AIWI aims to ensure that safety-relevant information reaches the public, regulators, and the broader AI safety ecosystem. Their complementary policy advocacy pushes for structural transparency from AI companies, creating both bottom-up (supporting individual insiders) and top-down (company policy reform) pressure for better safety practices at frontier AI labs.
Grants Received
- From Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Key risk: The main concern is low counterfactual throughput and focus drift: frontier insiders may not trust or use the service at scale, and expanding beyond bespoke casework could yield low-signal disclosures and broad advocacy that fail to translate into concrete reductions in catastrophic AI risk.
Details
- Last Updated: Apr 2, 2026, 9:59 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: By running Third Opinion—an audited, Tor-based anonymous consultation platform—alongside a defense fund and OPSEC/legal support, AIWI uniquely unlocks high-leverage insider disclosures from frontier labs that are otherwise bottlenecked, increasing the chance that pivotal safety-relevant information reaches regulators and the AI safety community when it matters.