Simon Skade
Bio
Simon Skade is an independent AI alignment researcher based in Germany. He studied computer science at the Technical University of Munich and began self-studying machine learning and AI safety through the rationalist and effective altruism communities. He conducted mostly non-prosaic alignment research from February 2022 through August 2025, during which time he won $10,000 in the Eliciting Latent Knowledge (ELK) contest and participated in MLAB (Machine Learning for Alignment Bootcamp) and SERI MATS cohorts 3.0 and 3.1. His research focused on ontology identification and an interdisciplinary approach to understanding minds, drawing on linguistics, psychology, and neuroscience, with the goal of creating more understandable and better-targeted AI systems. He received funding from the Long-Term Future Fund for independent study to deepen his understanding of the alignment problem. More recently, he has turned his attention toward advocacy for international coordination to more safely navigate the AI transition.
Links
- LessWrong: Towards_Keeperhood
Grants
- Long-Term Future Fund (independent study)
Details
- Last Updated
- Mar 23, 2026, 1:06 AM UTC
- Created
- Mar 20, 2026, 2:58 AM UTC