Modulo Research Ltd is a small research organization based in Cambridge, UK, directed by cognitive scientist Dr. Gabriel Recchia. Its mission is to increase the probability that advanced AI leads to net positive long-run outcomes for society through empirical research, model evaluations, and open data. The organization works across three areas: evaluating LLM capabilities and risks, testing scalable alignment techniques such as LLM sandwiching, and creating annotated datasets to support the broader AI safety research community.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $408,255
- Fiscal Sponsor: -
Theory of Change
Modulo Research believes that a key bottleneck to safe AI development is the lack of robust methods and tools for overseeing increasingly capable AI systems. By conducting empirical research into scalable oversight techniques (such as LLM sandwiching, debate, and critique), releasing expert-annotated benchmark datasets for evaluating these approaches, and publishing findings openly, Modulo aims to give the AI safety research community better tools to measure and improve the reliability of AI supervision. Insights from capability evaluations also directly inform companies and policymakers, enabling better-grounded decisions about AI deployment and regulation. The causal chain is: rigorous evaluation and dataset development -> stronger oversight techniques -> better-equipped organizations and policymakers -> safer AI deployment decisions -> reduced risk of catastrophic outcomes from advanced AI.
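The sandwiching setup mentioned above can be illustrated with a toy sketch: a non-expert overseer is scored on a task with and without model assistance, and both are compared against an expert ceiling. Everything below (the questions, the three answerer functions, and their behavior) is a hypothetical stand-in for illustration, not Modulo's actual evaluation code.

```python
# Toy sketch of a sandwiching evaluation: compare an unassisted
# non-expert (floor), a model-assisted non-expert (sandwiched
# condition), and an expert (ceiling) on the same questions.
# All data and answerers are illustrative stand-ins.

QUESTIONS = [
    {"q": "Is 97 prime?", "answer": True},
    {"q": "Is 91 prime?", "answer": False},
    {"q": "Is 57 prime?", "answer": False},
    {"q": "Is 89 prime?", "answer": True},
]

def non_expert(item):
    # Unassisted non-expert heuristic: guesses "prime" for any odd number.
    n = int(item["q"].split()[1])
    return n % 2 == 1

def model_assisted(item):
    # Non-expert aided by a model's check (here: exact trial division,
    # standing in for reliable model assistance on this toy task).
    n = int(item["q"].split()[1])
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def expert(item):
    # Expert ceiling: ground-truth labels.
    return item["answer"]

def accuracy(answerer):
    return sum(answerer(it) == it["answer"] for it in QUESTIONS) / len(QUESTIONS)

baseline = accuracy(non_expert)      # floor
assisted = accuracy(model_assisted)  # sandwiched condition
ceiling = accuracy(expert)           # expert ceiling
print(baseline, assisted, ceiling)
```

The question a sandwiching experiment asks is whether the assisted score closes the gap between the floor and the ceiling; here the gap closes fully only because the toy assistant is perfect on this task.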
Grants Received
- From Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Mar 21, 2026, 10:14 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC