Charlie Steiner
Bio
Charlie Steiner is an independent AI alignment researcher based in Boston, MA, focused on the problem of value learning. He holds a PhD in condensed matter physics and transitioned into AI safety research, where he works on conceptual progress in value learning and on translating that progress into experiments with language models and model-based reinforcement learning. A particular focus of his work is how to translate values and policies between different learned ontologies, with the goal of modeling human preferences, including higher-order preferences, in a principled rather than ad hoc way. He is an active contributor to LessWrong and the Alignment Forum (where his LW 1.0 username was Manfred), with over 75 posts. He has received a grant from the Long-Term Future Fund covering a 12-month independent research salary for his work on value learning. He also appears on the Future of Life Institute's community pages as an independent researcher in AI safety.
Links
- LessWrong: charlie-steiner
Grants
12-month independent research salary from the Long-Term Future Fund
Details
- Last Updated: Mar 22, 2026, 2:50 PM UTC
- Created: Mar 20, 2026, 2:48 AM UTC