Sam Clarke
Bio
Sam Clarke is a researcher working at the intersection of AI safety and AI governance. He studied computer science and philosophy at Oxford University, where his master's thesis applied Deep Bayesian Active Learning to the reward modeling approach to AI alignment. In late 2020 he relocated from New Zealand to Cambridge to work as a research assistant at the Leverhulme Centre for the Future of Intelligence, supporting Jess Whittlestone's work on mid-term AI impacts; this collaboration led to their co-authored paper 'A Survey of the Potential Long-term Impacts of AI' (AIES 2022).

He subsequently held a researcher role at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, and later became Strategy Manager at the Centre for the Governance of AI (GovAI) in Oxford, where he researches actionable questions in AI governance field-building strategy. He also co-authored a chapter on the history of AI existential safety in 'The Era of Global Risk' (2023) and has written on the longtermist AI governance landscape, talent needs, and AI risk scenarios on the EA Forum and LessWrong.
Links
- Personal Website: https://samsarana.github.io/
- Twitter / X: -
- LessWrong: sam-clarke
Grants
- Grant from the Long-Term Future Fund
Details
- Last Updated: Mar 23, 2026, 12:56 AM UTC
- Created: Mar 20, 2026, 2:57 AM UTC