Encode AI Corporation (formerly Encode Justice) is a youth-led nonprofit advancing AI safety, governance, and accountability through nonpartisan legislative advocacy and public education. Founded in 2020 by Sneha Revanur, the organization is headquartered in Washington, DC, with a California office in Sacramento. Encode mobilizes a 9-person core team and an international network of young volunteers to develop and pass AI safety legislation, hold AI companies accountable, and ensure AI development serves the public interest. The organization does not accept funding from corporations, foreign governments, or executives at top AI companies.
Funding Details
- Annual Budget: —
- Monthly Burn Rate: —
- Current Runway: —
- Funding Goal: —
- Funding Raised to Date: —
- Fiscal Sponsor: —
Theory of Change
Encode believes that the young people who will bear the greatest consequences of AI development should have a meaningful voice in shaping AI policy. Their theory of change rests on mobilizing youth advocates to drive nonpartisan legislative action at the state and federal level, creating concrete legal guardrails for AI development. By sponsoring and passing legislation like California's SB 53 (transparency requirements for frontier AI) and restrictions on AI in nuclear weapons systems, Encode aims to establish binding accountability mechanisms for AI developers. The organization also works to counterbalance industry lobbying power through coalition-building, public education, and strategic legal interventions such as amicus briefs in cases like Musk v. OpenAI. Their approach treats both present-day AI harms (algorithmic bias, deepfakes, surveillance) and catastrophic future risks as requiring immediate policy action, bridging the gap between near-term and long-term AI safety concerns.
Grants Received
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
Projects
- No linked projects
People
- No linked people
Discussion
Key risk: Professionalizing a youth-led, nontraditional policy shop carries a real risk of ideological drift: a shift toward broad AI-ethics work on near-term harms and the pursuit of symbolic transparency bills, diluting the focus on technically grounded capability and compute governance that most reduces existential risk.
Details
- Last Updated
- Apr 2, 2026, 9:51 PM UTC
- Created
- Mar 18, 2026, 11:18 PM UTC
Case for funding: Encode has demonstrated the ability to turn a large, independent youth network into concrete US policy wins that constrain frontier AI (e.g., California's SB 53 transparency requirements and the NDAA restriction on AI in nuclear weapons systems), offering unusually high leverage per dollar in countering industry capture and advancing binding guardrails at the state and federal levels.