Codruta Lugoj
Bio
Codruta (Coco) Lugoj is an independent AI safety researcher based in London, UK. She holds a master's degree in AI and has a background in machine learning research, with particular expertise in reinforcement learning and in probabilistic and variational inference, developed during her time at Radboud University. She received Long-Term Future Fund grants in 2023 and 2024 to build alignment research engineering skills and to continue her work evaluating agent self-improvement capabilities. As part of the Axiom Futures Alignment Research Fellowship, she co-authored "Auto-Enhance: Towards a Meta-Benchmark to Evaluate AI Agents' Ability to Improve Other Agents," presented at NeurIPS 2024. The paper proposes a meta-benchmark for measuring the ability of LLM agents to modify and improve other agents across tasks including prompt injection resiliency, dangerous knowledge unlearning, and resolving real GitHub issues. Her research sits at the intersection of AI safety evaluation and agentic AI capabilities.
Links
- Personal Website: -
- Twitter / X: -
- LessWrong: multidiversional_
Grants
- Long-Term Future Fund (2023)
- Long-Term Future Fund (2024)
Details
- Last Updated: Mar 22, 2026, 3:02 PM UTC
- Created: Mar 20, 2026, 2:49 AM UTC