The University of Michigan is one of the United States' leading public research universities, founded in 1817 and based in Ann Arbor. Within the AI safety ecosystem, UMich is notable for hosting several faculty researchers working on problems directly relevant to safe and trustworthy AI. Prof. Lu Wang (LAUNCH Lab, CSE) conducts research on alignment faking and out-of-context learning in AI systems, supported by Open Philanthropy. Prof. Samet Oymak (SOTA Lab, EECS) works on scalable oversight of language model agents. The university also hosts the Michigan AI Safety Initiative (MAISI), a student organization running AI safety education programs and community-building activities.
Funding Details
- Annual Budget: —
- Monthly Burn Rate: —
- Current Runway: —
- Funding Goal: —
- Funding Raised to Date: —
- Fiscal Sponsor: —
Theory of Change
The University of Michigan contributes to AI safety primarily through academic research and talent development. Faculty researchers like Lu Wang (alignment faking, trustworthy LLMs) and Samet Oymak (scalable oversight) produce technical research that advances understanding of how to detect and prevent unsafe AI behavior. By training graduate students and postdocs in these areas, UMich builds the pipeline of researchers who will go on to work on AI safety at labs, nonprofits, and other universities. The student-led MAISI organization multiplies this impact by introducing undergraduates to AI safety concepts and career paths.
Grants Received
- from Open Philanthropy
- from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Discussion
No comments yet.
Details
- Last Updated: Apr 2, 2026, 9:54 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC