The Center for Law & AI Risk (CLAIR) aims to establish Law and AI Safety as a scholarly field, believing that law has a distinct role to play in ensuring powerful frontier AI systems are developed safely and responsibly. Co-directed by legal scholars Yonathan Arbel (University of Alabama) and Peter Salib (University of Houston), CLAIR convenes academics and researchers through roundtables, writers' retreats, and student programs to develop legal frameworks for AI risk governance. Their research spans administrative law, tort liability, constitutional law, international cooperation, and novel approaches such as AI legal personhood as a safety mechanism.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $613,000
- Fiscal Sponsor: -
Theory of Change
CLAIR believes that law and legal institutions have a distinctive and underutilized role in reducing catastrophic and existential risks from advanced AI. Their theory of change centers on building a community of legal scholars who can develop the intellectual foundations for AI safety governance. By establishing Law and AI Safety as a recognized scholarly field, they aim to produce rigorous legal analysis that can inform policy, create liability frameworks that incentivize safe AI development, and develop novel legal tools such as AI legal personhood that could help align powerful AI systems with human interests. The causal chain runs from scholarly research to legal frameworks to governance structures that constrain dangerous AI development practices.
Grants Received
- From Survival and Flourishing Fund
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:49 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
