The Canadian Artificial Intelligence Safety Institute (CAISI) is a government body led by Innovation, Science and Economic Development Canada (ISED), with a mandate to advance the science of AI safety in collaboration with international partners. Operating through two research streams — an applied and investigator-led program delivered via CIFAR, and government-directed projects implemented by the National Research Council of Canada — CAISI conducts evaluations, develops safety methodologies, and provides guidance on AI risks to developers, users, and policymakers. It draws on expertise from Canada's three national AI institutes (Mila, Vector Institute, and Amii) and participates as a founding member of the International Network of AI Safety Institutes.
Funding Details
- Annual Budget: $10,000,000
- Monthly Burn Rate: $833,333
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
CAISI's theory of change holds that governments are better positioned to reduce risks from advanced AI when they have independent scientific capacity to evaluate and test AI systems. By building Canadian research expertise in AI safety evaluation, developing rigorous methodologies for assessing model risks, and sharing findings through the International Network of AI Safety Institutes, CAISI aims to provide the technical foundation that enables evidence-based AI governance. The institute's dual research streams — academic research via CIFAR and applied government work via the NRC — are designed to ensure that both cutting-edge knowledge and practical tools flow into policy decisions, reducing information asymmetry between governments and AI developers and strengthening global norms around safe AI deployment.
Grants Received: no grants recorded
Projects: no linked projects
People: no linked people
Details
- Last Updated: Apr 7, 2026, 8:29 PM UTC
- Created: Apr 7, 2026, 6:27 PM UTC