Charlie Rogers-Smith
Bio
Charlie Rogers-Smith is Chief of Staff at Palisade Research, an organization that studies dangerous AI capabilities to better understand misuse risks and advises policymakers accordingly. He holds an MSc in Statistics from the University of Oxford and a BSc in Mathematics from the University of St Andrews, and has conducted research at Aalto University, Imperial College London, and the Future of Humanity Institute at Oxford. He previously worked as an instructor at the Center for Applied Rationality (CFAR) and did predoctoral ML research at Oxford and Cambridge, including interpretability work with Adrian Weller. His published research includes co-authoring the "Badllama" paper, which demonstrated that safety fine-tuning can be removed from Llama 2-Chat 13B for under $200, as well as epidemiological work on COVID-19 intervention effectiveness. He received a $7,900 grant from the Long-Term Future Fund in September 2020 to support a research period at Oxford while applying to AI alignment PhD programs, and has written an influential career guide on pursuing technical AI alignment research.
Links
- Personal Website
- Twitter / X
- LessWrong
- charlie-rogers-smith
Grants
- $7,900 from the Long-Term Future Fund (September 2020)
Details
- Last Updated
- Mar 22, 2026, 2:49 PM UTC
- Created
- Mar 20, 2026, 2:48 AM UTC