Richard Ngo
Bio
Richard Ngo is an independent AI safety researcher and philosopher. He grew up in Vietnam and New Zealand and studied computer science and philosophy at the University of Oxford (BA, 2017), then earned a master's degree in machine learning from the University of Cambridge (2018). He began a PhD in the philosophy of machine learning at Cambridge, examining parallels between AI development and human cognitive evolution, before leaving the program in 2021. He was a research engineer on the AGI safety team at DeepMind (2018-2020), following an internship at the Future of Humanity Institute at Oxford. From 2021 to November 2024 he worked as a research scientist on the governance team at OpenAI, focusing on forecasting AI capabilities and risks, before departing over concerns about the organization's direction. He is best known for the essay series "AGI Safety from First Principles" (2020), for co-authoring "The Alignment Problem from a Deep Learning Perspective", and for designing the widely used AGI Safety Fundamentals curriculum. He is an active contributor to the AI Alignment Forum and LessWrong under the handle ricraz.
Links
- Personal Website
- https://www.richardcngo.com/
- Twitter / X
- LessWrong
- ricraz
Grants
from Long-Term Future Fund
Details
- Last Updated
- Mar 23, 2026, 12:33 AM UTC
- Created
- Mar 20, 2026, 2:57 AM UTC