Aryeh Englander
Bio
Aryeh Englander is a mathematician and AI researcher at the Johns Hopkins University Applied Physics Laboratory (APL), where he focuses on AI safety and AI risk analysis. He is also pursuing a PhD in Information Systems at the University of Maryland, Baltimore County (UMBC), with research centered on decision and risk analysis under extreme uncertainty, particularly regarding potential existential risks from very advanced AI. In 2021, he received a $100,000 grant from the Long-Term Future Fund to replace income lost from reducing to half-time at APL in order to pursue his doctorate; the rationale was that a PhD would position him for greater leadership and influence over AI safety practices at a major federal research institution. He co-leads the Modeling Transformative AI Risks (MTAIR) project alongside David Manheim and Daniel Eth, and co-authored the paper TanksWorld: A Multi-Agent Environment for AI Safety Research. Englander is an active contributor to the AI Alignment Forum, LessWrong, and the EA Forum.
Links
- Personal Website
- Twitter / X
- LessWrong (alenglander)
Grants
- $100,000 from the Long-Term Future Fund (2021)
Details
- Last Updated
- Mar 22, 2026, 2:27 PM UTC
- Created
- Mar 20, 2026, 2:48 AM UTC