Eli Tyre
Bio
Eli Tyre is an AI safety professional based in Berkeley, California, who has worked professionally on reducing existential risk from advanced AI since 2015. He currently serves as Head of Strategy at Palisade Research, where he briefs policymakers and the public on the intelligence explosion and AI loss-of-control risks, and works part-time as a grant round facilitator at Jaan Tallinn's Survival and Flourishing Fund. Previously, starting in 2016, he was a curriculum developer and instructor at the Center for Applied Rationality (CFAR), where he became the primary developer of Double Crux, a technique for resolving disagreements. He has also done contract work for Lightcone Infrastructure, the Machine Intelligence Research Institute (MIRI), and the Berkeley Existential Risk Initiative (BERI). He is an active contributor to LessWrong and the Alignment Forum, and maintains a personal blog, Musings and Rough Drafts.
Links
- Personal Website: https://elityre.com/
- Twitter / X
- LessWrong: elityre
Grants
- From the Long-Term Future Fund
Details
- Last Updated: Mar 22, 2026, 3:45 PM UTC
- Created: Mar 20, 2026, 2:50 AM UTC