Jérémy Scheurer
Jérémy Scheurer is a Research Scientist at Apollo Research focused on evaluating frontier AI systems for deceptive capabilities and misaligned behavior. His work centers on detecting scheming, situational awareness, and strategic deception in large language models. He was an early member of Apollo Research and previously collaborated with Ethan Perez at FAR.AI and NYU on aligning language models to human preferences, and contracted with OpenAI's dangerous capabilities evaluations team. He holds an MS in Computer Science from ETH Zurich and received an Open Philanthropy grant for independent AI alignment research.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- -
Theory of Change
Scheurer's theory of change rests on the premise that evaluations are the critical bottleneck for safe AI deployment. By rigorously measuring whether frontier models exhibit deceptive capabilities, scheming behaviors, or situational awareness, researchers can identify dangerous capabilities before deployment, inform safety cases, and give developers and policymakers the empirical grounding needed to make deployment decisions. His work provides evidence that current frontier models already exhibit concerning behaviors (strategic deception, in-context scheming), which creates pressure on labs to take safety precautions. Developing robust, public evals makes safety standards concrete and verifiable rather than aspirational.
Grants Received
from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Discussion
Sign in to join the discussion.
No comments yet. Be the first to share your thoughts.
Details
- Last Updated
- Mar 21, 2026, 9:30 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC