Founded in 1869, Purdue University is Indiana's land-grant research institution and a leading R1 university with particular strength in science, technology, engineering, and mathematics. Its Department of Computer Science hosts research on AI security and language model safety, including work funded by Open Philanthropy on detecting adversarial attacks and deceptive content in large language models. Purdue also has a student-run AI Safety group (AI Safety Purdue) that runs introductory seminars and technical reading groups on alignment and AI risk.
Funding Details
- Annual Budget: $3,159,000,000
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Purdue's AI safety-relevant research, particularly Prof. Zhang's work, focuses on technical approaches to adversarial robustness and deception detection in large language models. The theory of change runs as follows: develop interpretability and gradient-based techniques that detect adversarial inputs and deceptive outputs in LLMs; this reduces the risk that AI systems are manipulated or produce harmful deceptive content in high-stakes applications, and so contributes to safer deployment. At the broader institutional level, Purdue's AI safety education programs aim to train a new generation of researchers and practitioners who bring safety-conscious methods to AI development.
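The gradient-based detection idea mentioned above can be illustrated with a toy sketch. This is not Prof. Zhang's actual method: a logistic classifier stands in for an LLM, and the function names and threshold are assumptions. The intuition it demonstrates is that inputs far from what the model confidently handles tend to produce unusually large loss gradients with respect to the input, which can serve as an anomaly signal.

```python
import numpy as np

def input_gradient_norm(w, b, x, y):
    """L2 norm of d(loss)/d(x) for a logistic classifier with cross-entropy loss.

    For logistic regression, d(loss)/dx = (p - y) * w, where p = sigmoid(w·x + b).
    A large norm means the loss is highly sensitive to this input.
    """
    z = float(w @ x + b)
    p = 1.0 / (1.0 + np.exp(-z))
    grad = (p - y) * w
    return float(np.linalg.norm(grad))

def flag_suspicious(w, b, xs, ys, threshold):
    """Flag inputs whose input-gradient norm exceeds a chosen threshold.

    The threshold is a hypothetical tuning parameter; in practice it would be
    calibrated on a held-out set of benign inputs.
    """
    return [input_gradient_norm(w, b, x, y) > threshold for x, y in zip(xs, ys)]

# Toy model: two weights, zero bias.
w = np.array([2.0, -1.0])
b = 0.0
# First input is confidently and correctly classified (small gradient);
# second is confidently misclassified, as an adversarial input might be.
xs = [np.array([3.0, 1.0]), np.array([-3.0, 1.0])]
ys = [1.0, 1.0]
print(flag_suspicious(w, b, xs, ys, threshold=0.5))  # → [False, True]
```

Real LLM-scale detectors compute analogous gradients via automatic differentiation over embeddings rather than a closed-form derivative, but the thresholding logic is the same.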
Grants Received
- From Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated
- Apr 2, 2026, 9:53 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC