Vael Gates
Bio
Vael Gates is a researcher and field-builder focused on improving the safety of AI systems to reduce potential large-scale risks from advanced AI. They completed a PhD in Computational Cognitive Science at UC Berkeley in Tom Griffiths's lab, working on computational models of social cognition, and then held a postdoctoral fellowship at Stanford University, jointly appointed at HAI (Institute for Human-Centered Artificial Intelligence) and CISAC (Center for International Security and Cooperation). During the Stanford postdoc, they conducted structured interviews with nearly 100 AI researchers about their perceptions of risks from current and future AI systems, publishing the results as the "Risks from Advanced AI" (2022) and "Risks from Highly-Capable AI" (2023) series on the EA Forum and LessWrong. They then founded Arkose, an AI safety field-building nonprofit that provided informational resources and support calls to machine learning researchers and engineers interested in entering the field (closed June 2025), and subsequently served as Head of Content at FAR.AI, leading content programming for the FAR.Futures division, including global conferences, workshops, and community engagement. As of early 2026, Vael is the Founder and Executive Director of Humans in Control, a bipartisan grassroots organization focused on protecting communities from the risks of unchecked AI.
Links
- Personal Website: https://vaelgates.com/
- Twitter / X
- LessWrong: vael-gates
Grants
- Long-Term Future Fund
Details
- Last Updated: Mar 23, 2026, 1:51 AM UTC
- Created: Mar 20, 2026, 2:59 AM UTC