Nick Hay
Bio
Nick Hay is an AI alignment researcher and co-founder of Encultured AI, an alignment-focused startup developing platforms for AI safety experiments. He holds a PhD from UC Berkeley (2015), where he studied metalevel control under Professor Stuart Russell, applying reinforcement learning and Bayesian analysis to the question of how agents can learn to control their own computations. Before co-founding Encultured AI, he spent five years at Vicarious AI working on AGI approaches grounded in robotics and was a technical researcher at the Machine Intelligence Research Institute (2017-2021). In 2021 he received a $150,000 grant from the Long-Term Future Fund to design and implement simulations of human cultural acquisition as both an analog of and a testbed for AI alignment, working as a visiting scholar at CHAI advised by Andrew Critch and Stuart Russell.

His research interests span reinforcement learning, value alignment, and the use of cultural acquisition dynamics as a lens for understanding how AI systems can learn human-compatible behavior. He first engaged with AI safety in 2001 after reading Eliezer Yudkowsky's Creating Friendly AI; he went on to intern at MIRI in 2006 and attend the Singularity Summit in 2007.
Grants
- $150,000 from the Long-Term Future Fund (2021), to design and implement simulations of human cultural acquisition as a testbed for AI alignment
Details
- Last Updated
- Mar 22, 2026, 11:59 PM UTC
- Created
- Mar 20, 2026, 2:56 AM UTC