BERI-CLTC Collaboration
About
The BERI-CLTC Collaboration is a partnership between the Berkeley Existential Risk Initiative (BERI) and the Center for Long-Term Cybersecurity (CLTC) at UC Berkeley, focused on developing AI risk management standards to reduce risks from advanced AI systems. The partnership began as a trial collaboration in mid-2021, when BERI added CLTC to its portfolio of university research group partnerships. In June 2023, CLTC was converted to a main collaboration, the sixth such conversion in BERI's history, reflecting the demonstrated value of the standards work.

The core work centers on creating AI risk management standards profiles with supplemental guidance for developers of increasingly general-purpose AI systems (GPAIS) and foundation models. The team's flagship publication, the AI Risk-Management Standards Profile for General-Purpose AI Systems and Foundation Models, adapts established frameworks such as the NIST AI Risk Management Framework and ISO/IEC 23894 to address risks specific to broadly capable AI systems. The profile covers governance, risk mapping, measurement, and management functions for AI developers.

The collaboration is led by Dr. Anthony Barrett, who serves as both a Senior Policy Analyst at BERI and a Visiting Scholar at CLTC's AI Security Initiative. The team operates under the joint supervision of BERI's Executive Director and CLTC's AI Security Initiative Director. BERI employs the research staff, while CLTC at UC Berkeley provides the academic institutional home and research infrastructure.

More recent work has expanded to cover intolerable risk threshold recommendations for AI, and in early 2026 the team published an Agentic AI Risk-Management Standards Profile addressing the unique risks posed by autonomous AI systems. The team brings together academic researchers, industry practitioners, and federal government representatives to collaborate on standards that bridge technical safety research and practical governance.
Theory of Change
By developing concrete, actionable AI risk management standards profiles and guidance documents aligned with recognized frameworks such as the NIST AI RMF and ISO/IEC standards, the collaboration translates abstract AI safety concerns into practical, implementable governance tools for AI developers and policymakers. This builds a standards infrastructure that can influence how general-purpose AI systems are developed and deployed at scale, establishing safety norms before catastrophic risks materialize. The cross-sector approach, bringing together academia, industry, and government, aims to ensure these standards are both technically rigorous and practically adoptable.
Details
- Start Date
- -
- End Date
- -
- Expected Duration
- -
- Funding Raised to Date
- -
- Last Updated
- Apr 3, 2026, 1:18 AM UTC
- Created
- Apr 3, 2026, 1:18 AM UTC