The Center for Law & AI Risk (CLAIR) aims to establish Law and AI Safety as a scholarly field, believing that law has a distinct role to play in ensuring powerful frontier AI systems are developed safely and responsibly. Co-directed by legal scholars Yonathan Arbel (University of Alabama) and Peter Salib (University of Houston), CLAIR convenes academics and researchers through roundtables, writers' retreats, and student programs to develop legal frameworks for AI risk governance. Their research spans administrative law, tort liability, constitutional law, international cooperation, and novel approaches such as AI legal personhood as a safety mechanism.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $613,000
- Fiscal Sponsor: -
Theory of Change
CLAIR believes that law and legal institutions have a distinctive and underutilized role in reducing catastrophic and existential risks from advanced AI. Their theory of change centers on building a community of legal scholars who can develop the intellectual foundations for AI safety governance. By establishing Law and AI Safety as a recognized scholarly field, they aim to produce rigorous legal analysis that can inform policy, create liability frameworks that incentivize safe AI development, and develop novel legal tools such as AI legal personhood that could help align powerful AI systems with human interests. The causal chain runs from scholarly research to legal frameworks to governance structures that constrain dangerous AI development practices.
Grants Received
- from Survival and Flourishing Fund

Projects
- No linked projects

People
- No linked people
Key risk: The theory of change hinges on academic field-building, and on proposals like AI legal personhood, catalyzing timely and binding policy. Law-school hiring politics and the slow, uncertain translation of scholarship into regulation could leave counterfactual impact low while frontier capabilities race ahead.
Details
- Last Updated: Apr 2, 2026, 9:49 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: CLAIR offers unusually high leverage per dollar. By nudging already-salaried law professors and fellows toward a durable AI safety law field, for instance through targeted support that converts short-term fellowships into permanent professorships, it produces concrete scholarship (e.g., the Lawfare agenda and the Alabama roundtable) that can shape liability, administrative, and constitutional frameworks for frontier AI.