Dioptra
Dioptra is a volunteer-run research group that develops evaluations (evals) for AI safety. Founded by Joshua Clymer, a researcher who has worked at METR and Redwood Research, the group comprises roughly 21 students and engineers who contribute their time voluntarily. Its work centers on benchmark development for assessing AI capabilities and behaviors, including contributions to the GameBench paper evaluating strategic reasoning in LLM agents, for which Dioptra received retroactive funding from the EA Funds Long-Term Future Fund in 2024.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $9,072
- Fiscal Sponsor: -
Theory of Change
Dioptra aims to reduce risks from advanced AI by developing rigorous evaluations that can measure AI capabilities and behaviors, particularly those relevant to safety. By producing benchmarks and evals, the group contributes to the foundation needed for credible safety cases — structured arguments that AI systems are unlikely to cause catastrophic outcomes. Better evaluations enable developers, policymakers, and oversight bodies to make more informed decisions about AI deployment and control, thereby reducing the probability of catastrophic failures from advanced AI systems.
Grants Received
- from Long-Term Future Fund
- from Long-Term Future Fund
Projects
No linked projects.
People
No linked people.
Details
- Last Updated
- Mar 21, 2026, 7:39 PM UTC
- Created
- Mar 19, 2026, 10:42 PM UTC