Geodesic Research pursues what it calls "the shortest path to impact" in AI safety by training models end-to-end and measuring how various techniques affect safety and alignment. Unlike organizations focused primarily on evaluations or control research, Geodesic implements training methods directly and studies their effects. Their three main research areas are Alignment Pretraining, Obfuscated Reasoning, and Generalisation Hacking. The organization is based in Cambridge, UK, and is closely connected to the University of Cambridge AI safety community.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- Meridian Impact CIC
Theory of Change
Geodesic Research believes that directly implementing and testing AI safety and alignment training methods is a more direct path to reducing existential risk than evaluation-focused or purely theoretical approaches. By training models end-to-end with various safety techniques and rigorously measuring the effects, the organization aims to generate empirical findings that can inform how frontier AI labs train safer systems. Their research on obfuscated reasoning specifically addresses the risk that models under optimization pressure learn to hide unsafe reasoning from human monitors, which would undermine a key proposed safety mechanism. If this obfuscation generalizes across domains, it would mean that any optimization pressure on chain-of-thought could compromise AI oversight, making their findings directly relevant to near-term safety of advanced AI systems.
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Discussion
Sign in to join the discussion.
No comments yet. Be the first to share your thoughts.
Details
- Last Updated
- Apr 2, 2026, 9:52 PM UTC
- Created
- Mar 19, 2026, 10:30 PM UTC