Rauno Arike
Bio
Rauno Arike is an Estonian AI safety researcher and co-founder of Aether, an independent LLM agent safety research group based in London. He studied Computer Science and Physics at Delft University of Technology (TU Delft, 2021-2024), where he also co-founded an AI alignment university group, and is pursuing an MSc in Artificial Intelligence at the University of Amsterdam (2024-2026). Prior to his research career, he worked as a software engineer. He is a MATS Summer 2024 alumnus; during the program he worked in Marius Hobbhahn's stream alongside Elizabeth Donoway on goal-directedness evaluations for LLMs. He has also contracted with the UK AI Safety Institute. At Aether, his research focuses on chain-of-thought monitorability and the safety of LLM agents, and he co-authored a 2025 technical report evaluating goal drift in language model agents (arXiv:2505.02709).
Links
- Personal Website
- Twitter / X
- LessWrong
- rauno-arike
Grants
No grants recorded.
Details
- Last Updated
- Mar 23, 2026, 12:32 AM UTC
- Created
- Mar 20, 2026, 3:01 AM UTC