Daniel Kang is an Assistant Professor of Computer Science at the University of Illinois Urbana-Champaign (UIUC), where his research focuses on measuring and understanding dangerous capabilities of AI agents. He develops benchmarks and evaluations to assess how well AI agents can perform real-world cybersecurity attacks and other consequential tasks. His work—including CVE-Bench, InjecAgent, and the Agent Benchmark Checklist—has been adopted by frontier AI labs, government bodies, and safety organizations. He is funded by Open Philanthropy, Schmidt Sciences, Google, NSF, and Emergent Ventures, and mentors researchers through the MATS program.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $945,000
- Fiscal Sponsor: -
Theory of Change
Kang's theory of change is that demonstrating concrete, measurable dangerous capabilities of AI agents compels frontier labs, governments, and safety researchers to address specific vulnerabilities. By building rigorous benchmarks that quantify how well AI can hack systems, inject malicious prompts, and exploit real-world vulnerabilities, he creates accountability mechanisms and provides the empirical grounding necessary for effective governance and alignment work. Measuring the problem precisely is a prerequisite for solving it: if labs and regulators can see exactly how capable AI agents are at dangerous tasks, they are more likely to prioritize defenses and safety measures.
Grants Received
- from Open Philanthropy
- from Open Philanthropy
- from Open Philanthropy
Details
- Last Updated: Apr 2, 2026, 10:54 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC