Johannes C. Mayer
Bio
Johannes C. Mayer is an independent AI alignment researcher and game developer who has dedicated his career to addressing AI existential risk. He participates in the AI safety research community through LessWrong and the AI Alignment Forum, where he has published over 76 posts on topics including structural approaches to alignment, computational models of intelligence, and world model interpretability. He completed the MATS Summer 2022 cohort under the mentorship of Evan Hubinger, and has served as a mentor for the Supervised Program for Alignment Research (SPAR) at UC Berkeley. His research agenda focuses on translating intuitive concepts such as goals, wanting, and abilities into formal computational frameworks, and on constraining AI reasoning processes structurally rather than specifying only outcome-level objectives. He received a grant from the Long-Term Future Fund to pursue this research on turning intuitions about intelligence into concepts applicable to computational systems.
Links
- Personal Website: https://www.johannescmayer.com/
- Twitter / X
- LessWrong: johannes-c-mayer
Grants
- Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 10:18 PM UTC
- Created
- Mar 20, 2026, 2:52 AM UTC