Centre for International Governance Innovation
The Centre for International Governance Innovation (CIGI) is headquartered in Waterloo, Ontario, Canada, and operates as an endowment-funded, non-partisan think tank examining the governance of technology, data, and global challenges. CIGI's work spans four research pillars under its 2025-2030 Strategic Plan: AI and transformative technology; data, economy and society; digitalization, security and democracy; and global cooperation and governance. Its Global AI Risks Initiative, led by Executive Director Duncan Cass-Beggs, focuses specifically on developing international frameworks and fostering multilateral cooperation to prevent and mitigate catastrophic and global-scale risks from advanced AI. CIGI convenes policymakers, researchers, and civil society globally and publishes peer-reviewed papers, policy briefs, and special reports to shape governance responses to emerging AI threats.
Funding Details
- Annual Budget
- $8,000,000
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- -
Theory of Change
CIGI believes that the most dangerous risks from advanced AI are fundamentally global in nature and cannot be managed by any single government or technical community acting alone. The causal chain begins with CIGI producing rigorous, independent research that clarifies what global-scale AI challenges actually require in terms of international cooperation, including both technical AI safety and governance breakthroughs. CIGI then convenes policymakers, AI researchers, civil society, and international institutions to build consensus around specific governance mechanisms. By working through multilateral venues such as the G7, the UN, and bilateral policy dialogues, CIGI aims to catalyze the creation of international agreements, treaty frameworks, and institutional arrangements that can verify AI safety standards, coordinate responses to AI-related emergencies, and create legitimate processes for making shared decisions about how advanced AI is developed and deployed. The theory holds that, absent such international frameworks, competitive pressures among states and between companies will drive unsafe AI development, and that well-designed governance institutions are a necessary complement to technical AI safety work.
Grants Received
from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated
- Apr 2, 2026, 9:52 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC