
Berryville Institute of Machine Learning
The Berryville Institute of Machine Learning (BIML) is a Virginia-based 501(c)(3) nonprofit that conducts and publishes research on the security risks inherent in machine learning systems. Rather than using ML as a security tool, BIML applies a security engineering approach to ML itself, analyzing both accidental vulnerabilities and the potential for intentional misuse. Its most influential outputs are the BIML-78 framework (78 ML security risks spanning all ML process models, published in 2020) and a 2024 architectural risk analysis of large language models that identified 81 LLM-specific risks. BIML aims to give developers, engineers, and policymakers accessible, rigorous frameworks for building safer AI systems.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- $150,000
- Fiscal Sponsor
- -
Theory of Change
BIML believes that many of the most serious risks from ML and AI systems arise from security vulnerabilities baked in during design and development, not just from deployment failures or misuse after the fact. By rigorously cataloguing these risks using proven security engineering methods (such as architectural risk analysis) and publishing accessible frameworks, BIML aims to equip ML developers and engineers with the knowledge to build more secure systems from the ground up. The causal chain is: identify and systematize ML security risks → disseminate findings widely through publications, frameworks, and talks → practitioners adopt security-aware design practices → ML systems are built with fewer exploitable flaws → reduced likelihood of harmful misuse, catastrophic failures, or unintended consequences at scale.
Grants Received
from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Discussion
No comments yet.
Details
- Last Updated
- Apr 2, 2026, 9:59 PM UTC
- Created
- Mar 20, 2026, 2:35 AM UTC