
Preparing various policies for interactive reinforcement learning, presented at the SICE-ICASE International Joint Conference 2006 (SICE-ICASE 2006)


Bibliographic Details
Main Authors: Satoh, K., Yamaguchi, T.
Format: Conference Proceeding
Language: English; Japanese
Description
Summary: We propose a new method of preparing various policies that distinguishes main rewards from temporal rewards, aimed at interactive reinforcement learning, in which reward functions are given incrementally from the initial state to the goal state. Shaping is the theoretical framework of interactive reinforcement learning. Most previous shaping research assumes a shaping reward function that is a monotonic distance function to the main goal and that is policy invariant. However, these assumptions do not hold in interactive reinforcement learning. To address this, it is necessary to distinguish main rewards, which are included in an expected optimal policy, from temporal rewards, which serve only to guide learning toward that optimal policy. This paper proposes a reward discrimination method for an interactive reinforcement learning agent. First, we introduce the concept of every-visit-optimality to define various policies. Then we present a method to search for various policies on an identified MDP model. Experiments comparing modified-PIA with our method evaluate the total search cost of acquiring various policies. The experimental results show that our method keeps the total search cost nearly constant as the number of rewards increases. This suggests that our method is better suited than previous reinforcement learning methods to interactive reinforcement learning, in which many rewards are added incrementally.
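For context on the shaping framework the abstract contrasts itself with: a minimal sketch (not the authors' reward discrimination method) of tabular Q-learning on a small chain MDP, where a main reward is given only at the goal and a policy-invariant potential-based shaping reward F(s, s') = γ·φ(s') − φ(s) plays the role of the temporal guidance reward. The chain size, learning parameters, and the monotone potential φ are illustrative assumptions, not taken from the paper.

```python
import random

# Hedged sketch: main (goal) reward plus a policy-invariant potential-based
# shaping reward on a 6-state chain MDP. All constants are assumptions.
N = 6              # states 0..5, goal at state 5
GAMMA = 0.95       # discount factor
ALPHA = 0.5        # learning rate
ACTIONS = (-1, +1) # move left / move right

def phi(s):
    # Monotone "distance to goal" potential: closer to the goal => higher value.
    return float(s)

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    main_reward = 1.0 if s2 == N - 1 else 0.0   # main reward, only at the goal
    shaping = GAMMA * phi(s2) - phi(s)          # temporal reward guiding learning
    return s2, main_reward + shaping, s2 == N - 1

def q_learn(episodes=200, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        done = False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            target = r if done else r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learn()
greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(greedy)  # the greedy policy should move right, toward the goal
```

Because the shaping term is potential-based, it does not change which policy is optimal; it only speeds learning. The paper's point is that interactively added rewards generally do not satisfy this property, which is why main and temporal rewards must be discriminated.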
DOI:10.1109/SICE.2006.315139