Matt MacDermott
Bio
Matt MacDermott is an AI safety researcher currently serving as an Astra Fellow on the technical AI safety team at Coefficient Giving. He completed his PhD at Imperial College London as part of the Centre for Doctoral Training in Safe and Trusted AI, under the supervision of Dr Francesco Belardinelli, focusing on techniques for safe reinforcement learning and the foundations of goal-directed agency. During his PhD he worked with the Causal Incentives Working Group, and he was previously a research scientist at LawZero. He is a SERI MATS alumnus and has received multiple grants for AI alignment research. His published work includes "Measuring Goal-Directedness" (NeurIPS 2024 Spotlight), "Discovering Agents" (Artificial Intelligence journal, 2023), a Best Paper Award at TARK 2023 for work on multi-agent influence diagrams, and a 2025 paper co-authored with Yoshua Bengio on catastrophic risks from superintelligent agents.
Links
- Personal Website: https://mattmacdermott.com/
- Twitter / X
- LessWrong: mattmacdermott
Grants
- Long-Term Future Fund (three grants)
Details
- Last Updated: Mar 22, 2026, 11:25 PM UTC
- Created: Mar 20, 2026, 2:54 AM UTC