Axel Højmark
Bio
Axel Højmark is an AI safety researcher and Member of Technical Staff at Apollo Research, where he works on evaluations for AI scheming and deception. He studied Computer Science at the University of Copenhagen (DIKU), where his bachelor's thesis on AI-generated social media content was accepted at the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP).

He subsequently participated in the MATS Summer 2024 cohort, researching improved methods for evaluating the capabilities of LM agents and agent scaling laws under the mentorship of Jérémy Scheurer and Marius Hobbhahn of Apollo Research.

His key publications include "Forecasting Frontier Language Model Agent Capabilities" (2025), which evaluated six forecasting methods for predicting downstream LLM agent performance, and "Stress Testing Deliberative Alignment for Anti-Scheming Training" (2025), a collaboration with OpenAI examining mitigations for covert AI misbehavior. He has received funding from the Long-Term Future Fund to support his research on agent scaling laws and the relationship between training compute and agent capabilities.
Links
- Personal Website
- Twitter / X
- LessWrong (axel-hojmark)
Grants
- Long-Term Future Fund
Details
- Last Updated
- Mar 22, 2026, 2:27 PM UTC
- Created
- Mar 20, 2026, 2:48 AM UTC