Cornell University is a private Ivy League research university in Ithaca, New York, founded in 1865. In the AI safety space, Cornell hosts several key researchers and labs. Angelina Wang's Responsible AI Lab at Cornell Tech focuses on AI fairness, evaluation of generative AI, and the societal impacts of AI. Lionel Levine, a mathematics professor, conducts AI alignment research on value alignment and dispositional safety with support from Open Philanthropy. Bart Selman and Joseph Halpern serve as co-principal investigators of the Center for Human-Compatible AI (CHAI), which is headquartered at UC Berkeley. Cornell also has an active student AI Alignment group (CAIA) and an Effective Altruism chapter focused on AI safety and existential risk.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Cornell's AI safety impact operates through multiple channels. Its faculty conduct foundational research on AI alignment (mathematical frameworks for value alignment, dispositional safety), responsible AI (fairness evaluation, societal impact analysis), and human-compatible AI (value learning, cooperative AI). By embedding this research within a top-tier university, Cornell trains the next generation of AI safety researchers through graduate programs, the Math for AI Safety course, and student organizations like CAIA. Faculty participation in cross-institutional initiatives like CHAI and AISIC extends their influence beyond campus. The Responsible AI Lab specifically works to improve how AI systems are evaluated for fairness and safety, ensuring that the benchmarks and norms shaping AI development reflect rigorous, multi-faceted measurement rather than oversimplified metrics.
Grants Received
- From Open Philanthropy
Projects
Angelina Wang's research group at Cornell Tech focuses on responsible AI, studying fairness, evaluation methodologies, and societal impacts of AI systems to make them more equitable and accountable.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:50 PM UTC
- Created: Mar 19, 2026, 6:22 PM UTC