Active Learning of Reward Dynamics from Hierarchical Queries

Bibliographic Details
Main Authors: Basu, Chandrayee, Biyik, Erdem, He, Zhixun, Singhal, Mukesh, Sadigh, Dorsa
Format: Conference Proceeding
Language: English
Description
Summary: Enabling robots to act according to human preferences across diverse environments is a crucial task, extensively studied by both roboticists and machine learning researchers. To achieve it, human preferences are often encoded by a reward function, which the robot optimizes for. This reward function is generally static in the sense that it does not vary with time or with the interactions. Unfortunately, such static reward functions do not always adequately capture human preferences, especially in non-stationary environments: human preferences change in response to the emergent behaviors of the other agents in the environment. In this work, we propose learning reward dynamics that can adapt in non-stationary environments with several interacting agents. We define reward dynamics as a tuple of reward functions, one for each mode of interaction, together with mode-utility functions governing transitions between the modes. Reward dynamics thereby encode not only different human preferences but also how those preferences change. Our contribution is in the way we adapt preference-based learning into a hierarchical approach that aims to learn not only reward functions but also how they evolve based on interactions. We derive a probabilistic observation model of how people will respond to the hierarchical queries. Our algorithm leverages this model to actively select hierarchical queries that maximize the volume removed from a continuous hypothesis space of reward dynamics. We empirically demonstrate that reward dynamics can match human preferences accurately.
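The abstract's active-query criterion, selecting the query that maximizes the volume removed from a hypothesis space of reward parameters, can be illustrated with a small sketch. This is not the paper's implementation: it assumes a linear reward over trajectory features, a logistic observation model for pairwise preference answers, and Monte Carlo samples standing in for the continuous hypothesis space; all function and variable names (`choice_likelihood`, `volume_removed`, `w_samples`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def choice_likelihood(w, phi_a, phi_b):
    # Assumed observation model: P(human prefers trajectory A | weights w)
    # is a logistic function of the reward difference w . (phi_a - phi_b).
    return 1.0 / (1.0 + np.exp(-(w @ (phi_a - phi_b))))

def volume_removed(w_samples, phi_a, phi_b):
    # Estimate, over hypothesis samples, the mass of reward weights that
    # each possible answer would rule out; score the query by the worse
    # of the two answers, so a good query is informative either way.
    p_a = np.array([choice_likelihood(w, phi_a, phi_b) for w in w_samples])
    removed_if_a = np.sum(1.0 - p_a)  # hypotheses disagreeing with answer "A"
    removed_if_b = np.sum(p_a)        # hypotheses disagreeing with answer "B"
    return min(removed_if_a, removed_if_b)

# Monte Carlo samples of reward weights, and candidate pairwise queries
# (each query is a pair of trajectory feature vectors).
w_samples = rng.normal(size=(200, 3))
queries = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(20)]

# Actively pick the query whose answer removes the most hypothesis volume.
best = max(queries, key=lambda q: volume_removed(w_samples, *q))
```

The paper's hierarchical queries additionally probe mode transitions, so the hypothesis space there covers both per-mode reward weights and mode-utility parameters; this sketch shows only the single-level volume-removal selection step.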
ISSN: 2153-0866
DOI: 10.1109/IROS40897.2019.8968522