Samuel Brown
Bio
Sam F. Brown is an independent AI alignment researcher based in Oxford, UK. He has a background in physics and programming, and previously worked at a climate technology startup before pivoting to full-time alignment research. He has received two grants from the EA Long-Term Future Fund: an initial six-month grant (approximately £40,000) for research on goal-inference and choice-maximisation, and a subsequent twelve-month grant ($82,298) to research technical approaches to value lock-in and minimal paternalism. His work explores empowerment-based alignment — the idea of maximising humans' capacity to reach diverse future outcomes rather than inferring and locking in specific human values. He has published research essays on LessWrong and the EA Forum, including "The Empowerment of Others" and "Questions about Value Lock-in, Paternalism, and Empowerment". He is connected to the Oxford rationalist and EA community and works from spaces including Trajan House, the Centre for Effective Altruism's Oxford building.
Links
- Personal Website: https://sambrown.eu/
- Twitter / X:
- LessWrong: sam-f-brown
Grants
- Six-month grant (approximately £40,000) from the Long-Term Future Fund, for research on goal-inference and choice-maximisation
- Twelve-month grant ($82,298) from the Long-Term Future Fund, for research on technical approaches to value lock-in and minimal paternalism
Details
- Last Updated
- Mar 23, 2026, 12:58 AM UTC
- Created
- Mar 20, 2026, 2:57 AM UTC