The Secure AI Project is a 501(c)(4) nonprofit founded in December 2024 by Nick Beckstead and Thomas Woodside, headquartered in San Francisco. The organization advocates for legal requirements that major AI developers publish safety and security protocols, for whistleblower protections for those who reveal unsafe practices, and for clear incentives to mitigate risk in accordance with industry best practices. Its work spans legislative advocacy across multiple states, with a focus on transparency requirements for frontier AI developers.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
The Secure AI Project holds that the largest AI developers need legally binding transparency and safety requirements to adequately manage the severe risks posed by frontier AI systems. By advocating for state and federal legislation mandating safety protocol disclosure, whistleblower protections, and adherence to industry best practices, the organization aims to create a regulatory environment in which the biggest AI companies are held publicly accountable for their safety practices. Its theory is that transparency requirements will drive better safety behavior among frontier AI developers, that whistleblower protections will surface safety concerns that might otherwise be suppressed, and that these combined forces will reduce the probability of catastrophic harm from advanced AI.
Grants Received
from Survival and Flourishing Fund
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:49 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC