Angelina Wang's Responsible AI Lab
About
Angelina Wang's research group at Cornell Tech focuses on responsible AI, tackling critical questions about fairness, evaluation, and the societal impacts of AI systems. Wang joined Cornell Tech as an Assistant Professor of Information Science in Fall 2025, with field faculty appointments in Computer Science and Data Science. The lab is based at Cornell Tech on Roosevelt Island in New York City.

Wang's research spans three interconnected areas:
- AI fairness: She investigates how to move beyond one-size-fits-all, mathematically convenient notions of fairness that correlate poorly with real-world constructs. Her work has shown how technical approaches often oversimplify social concepts through harmful abstractions, and she develops more equitable formulations that engage with societal context.
- AI evaluation: She develops multi-faceted measurement approaches for generative AI systems, focusing in particular on how to evaluate systems whose behavior varies depending on who is interacting with them and in what context.
- Societal impacts: She examines AI's effects on humanity through frameworks including social epistemology and power dynamics, studying issues such as predictive systems and large language model-based human simulation.

Wang earned her PhD in Computer Science from Princeton University in 2024, where she was advised by Olga Russakovsky. Her dissertation, "Operationalizing Responsible Machine Learning: From Equality Towards Equity," bridged social implications and technical work to operationalize more equitable formulations of fairness. She holds a BS in Electrical Engineering and Computer Science from UC Berkeley and was a Postdoctoral Fellow at Stanford University's Institute for Human-Centered AI (HAI) and Regulation, Evaluation, and Governance Lab (RegLab) from 2024 to 2025. Her publications appear in top venues including Nature Machine Intelligence, PNAS, ACL, ICML, FAccT, ICCV, AAAI, and AIES.
She received the Best Paper Award at ACL 2025 for her work on measuring desired group discrimination in large language models, and her work received an Oral presentation at ICCV 2023. She has also been recognized with the NSF Graduate Research Fellowship, EECS Rising Stars, the Siebel Scholarship, and the Microsoft AI & Society Fellowship. In January 2026, she began a two-year term as a Non-Resident Fellow at the Center for Democracy and Technology. Her work has been featured in MIT Technology Review, Vice, the Washington Post, New Scientist, and Tech Brew.