Can We Secure AI With Formal Methods? (formerly Progress in Guaranteed Safe AI) is an independent newsletter published by Quinn Dougherty, a Research Engineer at the Beneficial AI Foundation based in Berkeley, CA. The newsletter's mission is to help formal methods researchers understand that AI security practitioners are a critical user base for their tools, and to help AI security practitioners understand how to engage with and request tools from the formal methods community. It covers verification benchmarks, hardware verification, program synthesis, specification elicitation, and related research developments, publishing every one to three months.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
The newsletter operates on the theory that one barrier to progress in AI safety via formal methods is a communication gap: formal methods researchers do not know that AI security is a high-value application domain, and AI security practitioners do not know how to articulate their needs to formal methods researchers. By translating across these communities, tracking relevant research, and disseminating practical information about tools and opportunities, the newsletter aims to accelerate the adoption of formal verification techniques in AI security. The approach treats formal verification as one layer in a defense-in-depth strategy rather than a panacea: verification provides strong guarantees within a specified scope, but it does not eliminate all real-world risks, as the sketch below illustrates.
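To make "strong guarantees within a specified scope" concrete, here is a minimal Lean 4 sketch (a hypothetical illustration, not code from the newsletter): a `clamp` function whose output is proven to stay inside `[lo, hi]`. The proofs are airtight within that spec, but they say nothing about whether those bounds were the right safety envelope in the first place, which is exactly the residual risk that the other layers of a defense-in-depth strategy must cover.

```lean
-- Hypothetical example: a verified clamp on an integer-valued action.
-- The theorems below are strong guarantees, but only within the spec's
-- scope: they bound the output by [lo, hi]; they do not say that
-- [lo, hi] was a safe choice of interval to begin with.
def clamp (lo hi x : Int) : Int := max lo (min hi x)

-- The clamped value never exceeds the upper bound (given a sane interval).
theorem clamp_le_hi (lo hi x : Int) (h : lo ≤ hi) : clamp lo hi x ≤ hi := by
  unfold clamp; omega

-- The clamped value never drops below the lower bound.
theorem lo_le_clamp (lo hi x : Int) : lo ≤ clamp lo hi x := by
  unfold clamp; omega
```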
Grants Received
- from the Long-Term Future Fund
Details
- Last Updated: Apr 2, 2026, 9:51 PM UTC
- Created: Mar 19, 2026, 10:31 PM UTC