Straumli builds technical infrastructure for AI safety auditing, with a focus on evaluating models for dual-use risks such as biowarfare and cyberwarfare capabilities. The company offers the world's first self-serve evaluation suite for AI misuse, allowing teams of any scale to assess their models against a range of threat categories. Alongside commercial auditing services, Straumli develops cryptographic coordination protocols — such as Hashmarking — that enable developers, auditors, regulators, and domain experts to collaborate on capability benchmarks without exposing sensitive reference solutions.
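The profile does not specify how Hashmarking works internally, but the general pattern it describes (checking answers against a benchmark without revealing the reference solutions) can be illustrated with a salted hash commitment. The sketch below is purely hypothetical: the function names, normalization step, and use of SHA-256 are assumptions for illustration, not Straumli's actual protocol.

```python
import hashlib
import secrets

def commit(reference_answer: str, salt: bytes) -> str:
    # Publish only a salted hash of the reference answer, not the answer itself.
    # Normalization (strip + lowercase) is an illustrative choice, not part of any spec.
    normalized = reference_answer.strip().lower().encode("utf-8")
    return hashlib.sha256(salt + normalized).hexdigest()

def check(candidate: str, salt: bytes, commitment: str) -> bool:
    # An evaluator holding the salt can grade a model's answer by re-hashing it,
    # without ever being shown the sensitive reference solution in the clear.
    return commit(candidate, salt) == commitment

salt = secrets.token_bytes(16)
c = commit("Placeholder Answer", salt)

print(check("  placeholder answer ", salt, c))  # True: matches after normalization
print(check("a different answer", salt, c))     # False
```

Under this kind of scheme, a benchmark author could distribute only the commitments, while auditors or regulators verify model outputs locally; the salt prevents dictionary attacks against short or guessable answers.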
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $400,000
- Fiscal Sponsor: -
Theory of Change
Straumli believes that a key bottleneck in AI safety is the lack of shared infrastructure for evaluation, auditing, and coordination among the parties responsible for governing AI. By building cryptographic protocols and self-serve evaluation tools, Straumli aims to lower the cost of rigorous capability assessment and to let developers, auditors, and regulators coordinate without requiring mutual trust or the sharing of sensitive information. This infrastructure is intended to enable governance mechanisms, such as mandatory auditing regimes and capability thresholds, that are currently intractable due to coordination failures. The intended outcome is reduced AI misuse risk through better detection of dangerous capabilities before deployment.
Grants Received
from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Discussion
No comments yet.
Details
- Last Updated
- Mar 22, 2026, 12:34 AM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC