Alfred Harwood
Bio
Alfred Harwood is a UK-based researcher working at the intersection of agent foundations and AI safety. He studied Natural Sciences before completing a PhD in physics at University College London, where his thesis focused on coherent and measurement-based feedback in quantum mechanics. In January 2024 he attended AI Safety Camp, which sparked his interest in agent foundations research. He subsequently received a grant from the Long-Term Future Fund to research geometric rationality, ergodicity economics, and their applications to decision theory and AI, and published philosophical work on geometric averaging in consequentialist ethics. Alongside Alex Altair, he co-leads the Dovetail Research Fellowship, an agent foundations research program funded by ARIA, where he mentors fellows working on mathematical AI safety. On LessWrong and the Alignment Forum he has written on the Good Regulator Theorem, the Internal Model Principle, and related selection-theorem topics in agent foundations.
Links
- LessWrong
- alfred-harwood
Grants
- Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 2:07 PM UTC
- Created
- Mar 20, 2026, 2:47 AM UTC