The University of Texas at Austin is one of the largest public research universities in the United States, spending $1.37 billion on research in FY2025. Its AI safety-relevant programs include the AI+Human Objectives Initiative (AHOI), an interdisciplinary community of 13 faculty focused on ensuring advanced AI is aligned with human goals and values, and a research group led by Scott Aaronson exploring the intersection of AI safety and computational complexity theory. Additional related programs include the Institute for Foundations of Machine Learning (IFML), the Good Systems ethical AI grand challenge, and the broader Texas AI initiative.
Funding Details
- Annual Budget: –
- Monthly Burn Rate: –
- Current Runway: –
- Funding Goal: –
- Funding Raised to Date: –
- Fiscal Sponsor: –
Theory of Change
UT Austin's AI safety programs pursue impact through two complementary paths. AHOI bets that a rigorous, interdisciplinary academic community spanning computer science, philosophy, linguistics, and social science can produce foundational research on alignment and safety that informs both technical AI development and policy. Aaronson's group bets that tools from theoretical computer science (complexity theory, cryptography, formal methods) can provide rigorous foundations for alignment problems, such as interpretability and robustness, that currently lack them. Together, these groups aim to build the research base and train the next generation of researchers working to reduce risks from advanced AI systems.
Grants Received
- From Open Philanthropy
- From Open Philanthropy
- From Open Philanthropy
- From Long-Term Future Fund
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:51 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC