The ARIA Lab in the Kahlert School of Computing at the University of Utah is led by Assistant Professor Daniel S. Brown. Its research spans reward and preference learning, human-in-the-loop machine learning, imitation learning, and AI safety, with applications in assistive and medical robotics, personal AI assistants, swarm robotics, and autonomous driving. The lab develops theoretical foundations, deploys its algorithms on robot hardware, and runs user studies to understand the human factors involved. Open Philanthropy has supported the lab with grants for AI alignment research and for course development on human-AI alignment.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
The ARIA Lab's theory of change centers on the premise that AI systems that behave in accordance with human values will be safer and more beneficial. By developing robust methods for learning reward functions and preferences from human input, together with formal methods for verifying alignment, the lab aims to provide the technical foundations needed for AI systems to remain aligned with humans as they become more capable. Better reward-learning and verification methods reduce the risk that AI systems pursue unintended objectives, and training researchers and students in this area grows the community capable of addressing alignment challenges. A minimal sketch of the kind of preference-based reward learning this refers to appears below.
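To make the reward-learning step concrete, here is a minimal sketch of learning a reward function from pairwise human preferences using a Bradley-Terry choice model, a common formulation in this literature. This is an illustrative example under simplifying assumptions (a linear reward over segment features, noiseless synthetic preferences), not the ARIA Lab's actual code; all function names and the toy data are hypothetical.

```python
# Sketch: preference-based reward learning with a Bradley-Terry model.
# Illustrative only; assumes a linear reward r(s) = w . phi(s).
import numpy as np

def preference_loss(w, seg_a, seg_b, prefs):
    """Negative log-likelihood of preferences. seg_a/seg_b hold
    per-segment feature sums; prefs[i] = 1 if segment A was
    preferred in comparison i, else 0."""
    ra, rb = seg_a @ w, seg_b @ w
    # Bradley-Terry: P(A preferred) = exp(ra) / (exp(ra) + exp(rb))
    p_a = 1.0 / (1.0 + np.exp(rb - ra))
    eps = 1e-8  # guard against log(0)
    return -np.mean(prefs * np.log(p_a + eps)
                    + (1 - prefs) * np.log(1 - p_a + eps))

def learn_reward(seg_a, seg_b, prefs, lr=0.1, steps=500):
    """Fit reward weights by gradient descent on the loss."""
    w = np.zeros(seg_a.shape[1])
    for _ in range(steps):
        ra, rb = seg_a @ w, seg_b @ w
        p_a = 1.0 / (1.0 + np.exp(rb - ra))
        # d(loss)/dw = mean over comparisons of (p_a - pref) * (phi_a - phi_b)
        grad = ((p_a - prefs)[:, None] * (seg_a - seg_b)).mean(axis=0)
        w -= lr * grad
    return w

# Toy usage: 2-D features; preferences generated from a hidden true
# reward (a real human labeler would be noisy).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5])
A = rng.normal(size=(200, 2))
B = rng.normal(size=(200, 2))
prefs = (A @ true_w > B @ true_w).astype(float)
w_hat = learn_reward(A, B, prefs)
print("learned weights:", w_hat)
print("final loss:", preference_loss(w_hat, A, B, prefs))
```

The learned weights recover the direction of the hidden reward from comparisons alone, which is the core idea behind learning reward functions from human preference data rather than hand-specifying them.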
Grants Received
- Grant from Open Philanthropy
- Grant from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:57 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC