University of British Columbia
Professor Jeff Clune leads an AI safety and alignment research group within the University of British Columbia's Department of Computer Science. His work spans deep learning, deep reinforcement learning, meta-learning, quality-diversity algorithms, and AI-generating algorithms (AI-GAs). He has increasingly shifted focus toward AI safety, interpretability, and governance, and received Open Philanthropy funding in 2023 to support research on AI alignment, safety, and existential risk. He also holds a Canada CIFAR AI Chair at the Vector Institute and serves as a Senior Research Advisor at DeepMind.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Clune believes that the path to safe and beneficial AI runs through both technical and governance channels. Technically, his lab works on AI interpretability (understanding what is happening inside neural networks), alignment research (ensuring AI systems pursue their intended goals), and open-ended AI systems that can be better understood and steered. On the governance side, he advocates for regulation of the most powerful frontier models and for international coordination. By training researchers in these areas and producing scientific results, his lab aims to build a body of knowledge and talent that helps the development of powerful AI and AGI go well for humanity.
Grants Received
from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:54 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC