Michigan State University
Michigan State University's Department of Computer Science and Engineering (CSE) houses the OPTimization and Trustworthy Machine Learning (OPTML) group, led by Red Cedar Distinguished Professor Sijia Liu. The group's AI safety work focuses on machine unlearning for large language models, adversarial robustness, backdoor defense, and reasoning model safety. The department has received direct AI safety funding from Open Philanthropy and the Center for AI Safety, and is one of the largest academic units at MSU, with approximately 45 faculty and over $8 million in annual research expenditures.
Funding Details
- Annual Budget: $8,000,000
- Monthly Burn Rate: not listed
- Current Runway: not listed
- Funding Goal: not listed
- Funding Raised to Date: not listed
- Fiscal Sponsor: not listed
Theory of Change
By developing rigorous techniques for machine unlearning, adversarial robustness, and backdoor defense in large foundation models, the OPTML group aims to make deployed AI systems safer and more controllable. The core bet is that technical safety research, particularly methods that reliably remove dangerous or incorrect knowledge from models and resist adversarial manipulation, reduces the risk that powerful AI systems cause harm, and that publishing this research widely helps diffuse safety improvements across the AI development ecosystem.
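To make the unlearning piece of this concrete, the sketch below shows a common gradient-difference unlearning baseline; it is an illustration, not OPTML's actual method. It assumes a Hugging Face-style PyTorch causal language model whose forward pass returns an object with a `.loss` attribute, and hypothetical `forget_batch` and `retain_batch` dictionaries of tensors. The idea is to raise the loss on data whose influence should be removed while keeping the loss low on data whose behavior should be preserved.

```python
# Illustrative gradient-difference unlearning step (a common baseline, not
# necessarily OPTML's method). Assumes a Hugging Face-style causal LM whose
# forward pass returns an object with a `.loss` attribute, and batches that
# are dicts of tensors (e.g. {"input_ids": ..., "attention_mask": ...}).
import torch


def unlearn_step(model, forget_batch, retain_batch, optimizer, alpha=1.0):
    """One update: raise the loss on the forget data (gradient ascent) while
    keeping the loss low on retained data (gradient descent)."""
    model.train()
    optimizer.zero_grad()

    # Next-token loss on the data whose influence should be removed.
    forget_loss = model(**forget_batch, labels=forget_batch["input_ids"]).loss
    # Next-token loss on data whose behavior should be preserved.
    retain_loss = model(**retain_batch, labels=retain_batch["input_ids"]).loss

    # Combined objective: minimize retain loss, maximize forget loss.
    loss = retain_loss - alpha * forget_loss
    loss.backward()

    # Clip gradients so the ascent term does not destabilize training.
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```

In practice a loop of such steps would be run over a small forget set, with periodic evaluation on held-out retain data to check that general capability is preserved while the targeted knowledge degrades.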
Grants Received
- Grant from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 10:55 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC