Mateusz Bagiński
Bio
Mateusz Bagiński is a Polish AI safety researcher based in Tallinn, Estonia. He holds a BSc and an MSc in cognitive science and previously worked as a programmer at a startup developing software for enhancing collective sense-making. After completing his dissertation, he transitioned into technical AI safety research, receiving a Long-Term Future Fund grant to skill up and gain experience working on AI safety full-time.

In 2024, he was a PIBBSS Fellow mentored by Tsvi Benson-Tilsen (ex-MIRI), conducting a conceptual investigation of the core drivers of goal-achieving mental activity using the hermeneutic net method. He presented preliminary results at the PIBBSS Symposium '24 under the title "Fixing our concepts to understand minds and agency." His research focuses on theoretical and agent foundations work.

He is active on LessWrong and the EA Forum and has co-authored posts on AI safety policy, including arguments for why safety-concerned researchers at capabilities labs should speak out publicly. He organizes the AFFINE Superintelligence Alignment Seminar, a five-weekend intensive program in Hostačov, Czech Republic, which brings together approximately 35 participants and leading mentors in the field.
Links
- Personal Website: -
- Twitter / X
- LessWrong: mateusz-baginski
Grants
- Grant from the Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 11:25 PM UTC
- Created
- Mar 20, 2026, 2:54 AM UTC