Luan Ademi
Bio
Luan Ademi is a machine learning researcher and student at the Karlsruhe Institute of Technology (KIT) in Baden-Württemberg, Germany, affiliated with the KASTEL Center for Applied Security Technology. His research focuses on AI interpretability, including feature attribution methods and neural network transparency. He is the author of toumei, an open-source interpretability library for PyTorch that implements feature visualization, causal tracing, and feature attribution techniques. He co-authored the paper "POMELO: Black-Box Feature Attribution with Full-Input, In-Distribution Perturbations" with Maximilian Noppel and Christian Wressnegger, presented at the 3rd World Conference on eXplainable Artificial Intelligence in 2025. His interpretability work contributes to AI safety by providing tools that make neural network behavior more transparent and understandable.
Links
- Personal Website: https://github.com/LuanAdemi
- Twitter / X: -
- LessWrong: -
Grants
No grants recorded.
Details
- Last Updated
- Mar 22, 2026, 11:03 PM UTC
- Created
- Mar 20, 2026, 3:00 AM UTC