The Legal Priorities Project (LPP), rebranded in February 2024 as the Institute for Law & AI (LawAI), is an independent think tank founded by researchers from Harvard University. The organization conducts rigorous legal research at the intersection of law and AI governance, advises governments and international organizations on AI regulation, and builds the field of AI law through programs like its Summer Research Fellowship and Summer Institute. Originally focused broadly on legal priorities research for existential risk reduction, LPP has sharpened its focus to concentrate on the legal challenges posed by artificial intelligence, producing policy briefings, foundational research, and direct advisory work for policymakers.
Funding Details
- Annual Budget: $2,055,293
- Monthly Burn Rate: $171,274
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $11,181,672
- Fiscal Sponsor: -
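For reference, the listed monthly burn rate is consistent with a simple twelfth of the annual budget. A minimal sketch of that arithmetic in Python follows; the division convention and the runway formula are illustrative assumptions, not stated on this page, and cash on hand (which a runway figure would require) is not listed.

```python
# Sketch of the funding arithmetic above (assumptions, not platform code).
annual_budget = 2_055_293  # USD, Annual Budget from the table

# Assumption: Monthly Burn Rate = annual budget / 12, rounded down.
monthly_burn = annual_budget // 12
print(f"Monthly burn rate: ${monthly_burn:,}")  # -> $171,274, matching the table

# Hypothetical helper: runway in months given cash on hand, which is not
# listed above and would explain the empty Current Runway field.
def runway_months(cash_on_hand: float, burn: float = monthly_burn) -> float:
    return cash_on_hand / burn
```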
Theory of Change
LawAI believes that as AI systems become more powerful and widely deployed, the legal and regulatory frameworks governing them will be critically important for ensuring these systems are safe, beneficial, and consistent with the rule of law. Their theory of change operates on multiple levels:
- producing foundational research that improves policymakers' understanding of AI governance questions;
- providing direct advisory services to governments and international organizations drafting AI regulations, so that laws are well designed rather than counterproductive;
- training the next generation of legal experts in AI governance through fellowships and educational programs, building a pipeline of talent for a critically understaffed field; and
- publishing research that shapes academic and policy discourse on how legal systems should respond to advanced AI.
By combining rigorous legal scholarship with practical policy engagement and field-building, they aim to ensure that legal frameworks are in place to mitigate catastrophic risks from AI while supporting beneficial development.
Grants Received
- from Open Philanthropy
- from Open Philanthropy
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
- from Long-Term Future Fund
- from Survival and Flourishing Fund
Projects
No linked projects.

People
No linked people.
Key risk: LawAI's rapid scale-up from ~7 to ~30 staff, combined with its broad program scope, creates execution and focus risks. Research quality and policy uptake may lag, marginal work may duplicate that of other governance organizations, and there is limited evidence that their analyses will translate into binding, safety-critical regulations.
Details
- Last Updated: Apr 2, 2026, 9:51 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: LawAI couples rigorous AI-law research with direct government advisory work and a proven talent pipeline (its Summer Institute and Summer Research Fellowship), positioning an experienced team (Winter, O'Keefe, Arnold, Maas, Leung) to shape concrete regulatory tools and governance institutions at the moment frontier AI law is being written.