At Johns Hopkins University, the AI safety-relevant work centers on Prof. Anqi (Angie) Liu's research group in the Department of Computer Science at the Whiting School of Engineering. Her group develops principled machine learning algorithms aimed at building reliable, trustworthy, and human-compatible AI systems, with particular emphasis on robustness to distribution shift, accurate uncertainty estimation, and alignment with human values. Liu is a member of the AI Existential Safety Community at the Future of Life Institute, and her research has received support from Open Philanthropy, Amazon, and Johns Hopkins internal grants. Her group collaborates with social scientists on trustworthy social media and intelligent tutoring systems.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Liu's approach holds that near-term and long-term AI risks are substantially driven by AI systems that are brittle under distribution shift, poorly calibrated in their uncertainty, and misaligned with human values. By developing rigorous mathematical methods for robust learning, uncertainty quantification, and conformal prediction, the group aims to give practitioners and researchers the tools to build AI systems that fail safely, honestly represent their own limitations, and remain reliable in deployment environments that differ from training conditions. The work primarily targets the technical foundations of trustworthy AI rather than policy, with the belief that principled ML algorithms are a prerequisite for safe, high-stakes AI deployment.
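To make the conformal prediction thread concrete, below is a minimal sketch of split conformal prediction for regression, the standard technique that turns any point predictor into one with finite-sample coverage guarantees. This is an illustrative example of the general method, not code from Liu's group; the function name, the synthetic data, and the identity "model" are all hypothetical.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction for regression.

    Given predictions and labels on a held-out calibration set, returns a
    prediction interval around `test_pred` with (1 - alpha) marginal
    coverage, for any underlying model, assuming exchangeable data.
    """
    n = len(cal_labels)
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(np.asarray(cal_labels) - np.asarray(cal_preds))
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(scores, q_level, method="higher")
    return test_pred - q_hat, test_pred + q_hat

# Toy usage: a hypothetical "model" that predicts y = x exactly,
# calibrated on data with Gaussian noise of standard deviation 0.1.
rng = np.random.default_rng(0)
x_cal = rng.uniform(0, 1, 500)
y_cal = x_cal + rng.normal(0, 0.1, 500)
lo, hi = split_conformal_interval(x_cal, y_cal, test_pred=0.5, alpha=0.1)
```

The coverage guarantee here is distribution-free but only marginal; much of the research interest, including robustness to distribution shift, lies in strengthening or adapting such guarantees when test data no longer match the calibration distribution.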
Grants Received
- from Open Philanthropy
- from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated
- Apr 2, 2026, 9:53 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC