AI Safety, Ethics and Society (AISES)
About
AI Safety, Ethics and Society (AISES) is an educational initiative of the Center for AI Safety (CAIS), a San Francisco-based 501(c)(3) nonprofit dedicated to reducing societal-scale risks from artificial intelligence. The project was created by Dan Hendrycks, CAIS's Executive Director, and launched in May 2024.

At the core of AISES is a comprehensive, freely available textbook, Introduction to AI Safety, Ethics, and Society, published by Taylor & Francis/Routledge in 2024 (ISBN: 9781032798028). The textbook is available online at aisafetybook.com, in print, and as an audiobook on Spotify. It takes an interdisciplinary approach, drawing on engineering, economics, ethics, and policy, and covers eight major topic areas:
- catastrophic AI risks (malicious use, accidents, and rogue AI scenarios)
- AI fundamentals and scaling laws
- single-agent safety challenges
- safety engineering methodologies
- complex systems analysis
- beneficial AI and machine ethics
- collective action problems and game theory
- governance frameworks at the corporate, national, and international levels

Alongside the textbook, AISES runs a free virtual course in a cohort format. Each cohort consists of approximately nine weeks of interactive small-group discussions led by experienced facilitators, followed by a four-week personal project phase. The time commitment is roughly five hours per week, and the course is free of charge and open to anyone with reliable internet access.

CAIS classifies AISES as a field-building initiative under its educational content and infrastructure work. Its explicit goal is to expand the pipeline of people, including non-engineers, who understand AI safety issues and can contribute to addressing them. As of late 2025, the most recent cohort runs from November 2025 through February 2026. The project does not operate independently of CAIS and has no separate budget or legal entity.
Theory of Change
AISES operates on the theory that reducing AI-related catastrophic risk requires broad societal understanding, not just technical research. By making high-quality, interdisciplinary AI safety education freely available to students, policymakers, journalists, and the general public, the project aims to grow the number of informed people who can contribute to AI safety efforts, whether as researchers, advocates, policymakers, or engaged citizens. Expanding the pipeline of informed participants across technical, governance, and ethics domains increases society's collective capacity to develop appropriate norms, regulations, and institutions before transformative AI systems are deployed.
Details
- Start Date
- -
- End Date
- -
- Expected Duration
- -
- Funding Raised to Date
- -
- Last Updated
- Apr 3, 2026, 1:20 AM UTC
- Created
- Apr 3, 2026, 1:20 AM UTC