Jaeson Booker
Bio
Jaeson Booker is an independent AI alignment researcher and the founder and fund manager of the AI Safety Research Fund, a nonprofit initiative dedicated to AI safety grantmaking. He has a background as a software engineer, startup founder, and senior cybersecurity analyst auditing blockchain contracts. He completed the AGI Safety Fundamentals courses (both the Technical and Governance tracks), participated in SERI MATS in the Agent Foundations stream, and took part in AI Safety Camp (Group 22), where his team assessed how promising automating alignment research would be. He served as a senior executive at the Center for AI Responsibility and Education, where he developed the curriculum for an introductory course on AI risk and alignment, and was a resident at CEEALAR (Centre for Enabling EA Learning and Research), where he worked on AI safety strategy and research projects. He runs the AI Safety Papers Substack (formerly the Alignment Research Newsletter), covering the latest work in alignment, interpretability, and AI safety. His research interests center on collective intelligence systems for alignment, mechanism design for AI safety, and multi-agent alignment.
Links
- Personal Website
- https://aisafetyfund.org/
- Twitter / X
- LessWrong
Grants
Grant from the Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 4:41 PM UTC
- Created
- Mar 20, 2026, 2:52 AM UTC