Robert Kralisch
Bio
Robert Kralisch is an independent conceptual and theoretical AI alignment researcher with a background in cognitive science. He became interested in AI safety in 2014 after reading Nick Bostrom's Superintelligence and later pursued both computer science and cognitive science before leaving formal academia to focus on independent alignment research. He completed the AI Safety Fundamentals course in 2021 and has since received funding from the Long-Term Future Fund for independent research. His work centers on three main areas: conceptual clarity around notions of agency, intelligence, and embodiment; the development of more inherently interpretable cognitive architectures (including his Prop-room and Stage Cognitive Architecture); and Simulator theory as an alternative framework for understanding large language models. He also serves as a research coordinator and organizer for AI Safety Camp, where he evaluates and supports conceptually sound alignment research projects.
Grants
- Long-Term Future Fund (for independent alignment research)
Details
- Last Updated
- Mar 23, 2026, 12:33 AM UTC
- Created
- Mar 20, 2026, 2:57 AM UTC