Shahar Avin
Bio
Shahar Avin is an AI safety researcher and Systemic Safety Fund Lead at the UK AI Security Institute (AISI), which he joined on secondment from the Centre for the Study of Existential Risk (CSER) at the University of Cambridge. He holds a BA and MSci in Natural Sciences and a PhD in History and Philosophy of Science, all from the University of Cambridge; his doctoral thesis examined the rational allocation of public resources to scientific research. Before entering AI safety research, he worked as a software engineer at Google.
At CSER, where he remains a Senior Research Associate, his work has focused on existential risk mitigation strategies, AI governance, and the use of roleplay and simulation games to explore AI futures. Most notably, he created Intelligence Rising, a tabletop strategy game used to stress-test assumptions about advanced AI development. He has co-authored influential publications including The Malicious Use of Artificial Intelligence (2018), Filling Gaps in Trustworthy Development of AI (2021), and Frontier AI Regulation (2023), as well as work on computing power as a lever for AI governance.
Links
- Personal Website
- https://www.shaharavin.com/
- Twitter / X
- LessWrong
Grants
from Long-Term Future Fund
Details
- Last Updated
- Mar 23, 2026, 1:06 AM UTC
- Created
- Mar 20, 2026, 2:58 AM UTC