The 1st Workshop on Goal Specifications for Reinforcement Learning (GoalsRL) addressed a fundamental challenge in RL: as environments and tasks grow more complex, engineering reward functions that elicit desired behavior becomes increasingly difficult and increasingly susceptible to reward hacking. The workshop explored alternatives including inverse reinforcement learning, imitation learning, hierarchical RL, and goal specification through visual targets or natural language. Organized by researchers from Georgia Tech and Brown University, it was held in 2018 in Stockholm, co-located with three major AI/ML venues: ICML, IJCAI, and AAMAS.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $7,500
- Fiscal Sponsor: -
Theory of Change
By convening researchers focused on alternatives to scalar reward specification, GoalsRL aimed to accelerate progress on making RL agents safer and more aligned with human intent. Poorly specified rewards lead to reward hacking and unintended behavior, problems directly relevant to AI safety. Better goal specification methods narrow the gap between what designers want and what agents actually optimize, contributing to more reliably aligned AI systems.
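The reward-hacking failure mode described above can be made concrete with a toy sketch. This example is hypothetical (not from the workshop): a designer wants an agent to reach a goal cell on a number line, but specifies a proxy reward for being *adjacent* to the goal, which a hovering policy can exploit indefinitely.

```python
# Toy illustration of reward hacking (hypothetical example, not from GoalsRL).
# The designer's proxy reward pays +1 per step spent next to the goal, so a
# policy that hovers beside the goal forever outscores one that solves the task.

def proxy_reward(position: int, goal: int) -> float:
    """Designer's misspecified proxy: +1 for every step adjacent to the goal."""
    return 1.0 if abs(position - goal) == 1 else 0.0

def intended_return(trajectory: list[int], goal: int) -> float:
    """What the designer actually wants: the agent reaches the goal."""
    return 1.0 if goal in trajectory else 0.0

goal = 5
hover = [4] * 8          # policy A: loiter next to the goal (reward hacking)
solve = [1, 2, 3, 4, 5]  # policy B: walk to the goal and finish

proxy_hover = sum(proxy_reward(p, goal) for p in hover)  # 8.0
proxy_solve = sum(proxy_reward(p, goal) for p in solve)  # 1.0

# Under the proxy, the hacking policy dominates despite never solving the task.
assert proxy_hover > proxy_solve
assert intended_return(hover, goal) == 0.0
assert intended_return(solve, goal) == 1.0
```

The gap between `proxy_reward` and `intended_return` is exactly the designer–agent mismatch the workshop's alternative specification methods aim to shrink.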
Grants Received
- From Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:52 PM UTC
- Created: Mar 20, 2026, 2:35 AM UTC