
Computational and Biological Learning Lab (CBL)
The Computational and Biological Learning Lab (CBL) at the University of Cambridge applies engineering and mathematical methods to two intertwined goals: understanding the computational principles underlying biological intelligence, and building more capable and reliable artificial learning systems. The lab's research spans computational neuroscience, Bayesian machine learning, Gaussian processes, probabilistic programming, deep learning, and sensorimotor control. CBL is home to the Cambridge Machine Learning Group (MLG) and includes research groups led by faculty such as Zoubin Ghahramani, Richard Turner, Carl Edward Rasmussen, Máté Lengyel, Guillaume Hennequin, and José Miguel Hernández-Lobato. AI safety-relevant work at CBL spans uncertainty quantification, alignment, and human-AI interaction, with contributions from David Krueger (formerly) and PhD students working on LLM safety and cooperative AI.
Funding Details
- Annual Budget: —
- Monthly Burn Rate: —
- Current Runway: —
- Funding Goal: —
- Funding Raised to Date: —
- Fiscal Sponsor: —
Theory of Change
CBL's implicit theory of change for AI safety operates primarily through foundational research. By advancing probabilistic and Bayesian machine learning, the lab develops tools for principled uncertainty quantification — enabling AI systems to know what they don't know, which is essential for safe deployment in high-stakes settings. The lab also trains researchers (PhD students, postdocs) who go on to work directly on AI safety problems. Through faculty like David Krueger (formerly) and research on alignment, goal misgeneralization, and LLM safety, CBL contributes technical work aimed at understanding how to build AI systems whose behavior reliably reflects human intentions. The broader hypothesis is that progress on the mathematical and statistical foundations of machine learning — including understanding generalization, robustness, and inference under uncertainty — reduces the likelihood of dangerous failures in future AI systems.
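The uncertainty quantification mentioned above is concrete in one of CBL's core research areas, Gaussian process regression, where a model's predictive variance directly encodes what it does not know. A minimal sketch (the toy data, kernel, and parameters are illustrative, not drawn from CBL's work): the posterior variance is near zero close to observed data and reverts to the prior variance far from it.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel matrix between 1-D input arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Toy data: three noisy observations of a 1-D function.
x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
noise = 1e-4

# Query points: one near the training data, one far away.
xq = np.array([0.5, 5.0])

# Closed-form GP posterior mean and variance.
K = rbf(x, x) + noise * np.eye(len(x))
Ks = rbf(xq, x)
Kss = rbf(xq, xq)
mean = Ks @ np.linalg.solve(K, y)
var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))

# var[0] is small (interpolation near data); var[1] is close to the
# prior variance of 1.0 (the model "knows" it is extrapolating).
print(var)
```

A downstream system can use this variance to defer or abstain when predictions are unreliable, which is the safety argument the paragraph above gestures at.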
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Discussion
No comments yet.
Details
- Last Updated
- Apr 2, 2026, 10:10 PM UTC
- Created
- Mar 19, 2026, 10:30 PM UTC