Harmony Intelligence works with leading AI labs, enterprises, and governments to rigorously evaluate frontier AI systems and to build defensive cybersecurity infrastructure against AI-powered threats. Its core focus is identifying dangerous AI capabilities across domains including cybersecurity, biosecurity, persuasion, and self-exfiltration. The company has contributed to safety evaluations for the International Network of AI Safety Institutes, published three AI safety papers cited in the 2025 International AI Safety Report, and briefed the Australian Senate on emerging AI risks. It has also received an Open Philanthropy grant to develop a publicly available benchmark measuring the autonomous moneymaking capabilities of LLM agents.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $3,469,625
- Fiscal Sponsor: -
Theory of Change
Harmony Intelligence believes that catastrophic AI risk can be reduced by rigorously measuring and exposing dangerous AI capabilities before they are exploited. By conducting capability evaluations and red teaming of frontier models across high-stakes domains (cybersecurity, biosecurity, persuasion, self-exfiltration), they provide empirical evidence of which AI systems pose genuine risks, informing AI lab safety decisions, government policy, and international governance efforts. On the defensive side, by building AI-powered cybersecurity products that match the speed and sophistication of AI-enabled attackers, they reduce the attack surface that malicious actors could exploit through advanced AI systems. Together, these activities aim to ensure that dangerous AI capabilities are identified, disclosed, and defended against before they cause irreversible harm.
Grants Received
- Grant from Open Philanthropy
Details
- Last Updated: Apr 2, 2026, 9:52 PM UTC
- Created: Mar 19, 2026, 10:30 PM UTC