Fabian Schimpf
Bio
Fabian Schimpf is an independent AI alignment researcher based in Stuttgart, Germany, supported by a grant from the Long-Term Future Fund to upskill into AI alignment research and to conduct independent research on the limits of predictability, with mentorship from Andrea Iannelli at the University of Stuttgart. His research focuses on improving robustness in deep learning and on using insights from that field to advance interpretability, as a path toward ensuring AI robustly benefits humanity. He has a background in aerospace engineering from the University of Stuttgart, where he worked on autonomous soaring and asteroid exploration at the Flight Mechanics and Controls lab and completed an internship at NASA, and he has contributed to roughly ten publications spanning aerospace and machine learning. He is active on LessWrong and the AI Alignment Forum under the handle 'fasc', where he has written on robustness in AI alignment and co-authored work on negative side-effect minimization as part of an AI Safety Camp project.
Links
- Personal Website: https://schimpffabian.github.io/
- Twitter / X
- LessWrong: fasc
Grants
- Grant from the Long-Term Future Fund
Details
- Last Updated: Mar 22, 2026, 3:59 PM UTC
- Created: Mar 20, 2026, 2:50 AM UTC