Darryl Wright
Bio
Darryl Wright is an AI safety researcher and advisor who received a Long-Term Future Fund grant to conduct independent AI safety research, focusing on two projects: penalizing neural networks for learning polysemantic neurons, and crowdsourcing alignment research from volunteers. He holds a Master's in Physics and a PhD in Astrophysics from Queen's University Belfast, where his doctoral research applied machine learning to transient survey classification using data from the Pan-STARRS1 survey. He subsequently held research positions at the Minnesota Institute for Astrophysics at the University of Minnesota, the University of Oxford, and Mayo Clinic Rochester, where he worked on machine learning applications in healthcare. He also contributed to the Zooniverse citizen science platform, investigating how AI and citizen scientists can cooperate on data-intensive research problems. He has served as a mentor for the Supervised Program for Alignment Research (SPAR) and is currently an AI policy and law advisor at Successif.
Links
- Personal Website
- https://dr-darryl-wright.github.io/
- Twitter / X
- LessWrong
Grants
from Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 3:26 PM UTC
- Created
- Mar 20, 2026, 2:50 AM UTC