Stochastic field model for autonomous robot learning
Main Authors: | , , , |
---|---|
Format: | Conference Proceeding |
Language: | English |
Summary: | Through reinforcement learning, an autonomous robot creates an optimal policy that maps the state space to the action space. The mapping is obtained by trial and error through interaction with a given environment and is represented as an action-value function. The environment provides information in the form of scalar feedback known as a reinforcement signal. As a result of reinforcement learning, one action in each state comes to have a high action-value, so the optimal policy is equivalent to choosing the action with the highest action-value in each state. Typically, even when an autonomous robot has continuous sensor values, a summation over discretized values is used as the action-value function to reduce learning time. However, reinforcement learning algorithms such as Q-learning then suffer from errors due to state-space sampling. To overcome this, we propose EQ-learning (extended Q-learning), based on an SFM (stochastic field model). EQ-learning is designed to accommodate continuous state space directly and to improve generalization capability. In EQ-learning, the action-value function is represented as a summation of weighted base functions, and the autonomous robot adjusts the weights of the base functions during the learning stage. The other parameters (center coordinates, variances, and so on) are adjusted during the unification stage, where two similar base functions are unified into a single simpler function. |
ISSN: | 1062-922X, 2577-1655 |
DOI: | 10.1109/ICSMC.1999.825356 |
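The summary describes an action-value function built as a weighted sum of base functions over a continuous state space, with the weights adjusted during a learning stage and the remaining parameters adjusted by unifying similar base functions. The following Python sketch is a minimal illustration of that scheme under stated assumptions: Gaussian base functions, a standard TD-style weight update, and an averaging form of unification. The class name `RBFQFunction`, the merge rule, and all numeric settings are illustrative, not taken from the paper.

```python
import numpy as np

class RBFQFunction:
    """Q(s, a) as a weighted sum of Gaussian base functions (sketch)."""

    def __init__(self, centers, variances, n_actions):
        self.centers = np.asarray(centers, dtype=float)      # (K, state_dim)
        self.variances = np.asarray(variances, dtype=float)  # (K,)
        self.weights = np.zeros((n_actions, len(self.centers)))

    def features(self, state):
        # Gaussian base functions evaluated at a continuous state
        diffs = self.centers - np.asarray(state, dtype=float)
        sq_dist = np.sum(diffs ** 2, axis=1)
        return np.exp(-sq_dist / (2.0 * self.variances))

    def q_values(self, state):
        # Q(s, a) = sum_k w[a, k] * phi_k(s), for every action a
        return self.weights @ self.features(state)

    def update(self, state, action, target, alpha=0.1):
        # Learning stage: adjust only the weights, via the TD error
        phi = self.features(state)
        td_error = target - self.weights[action] @ phi
        self.weights[action] += alpha * td_error * phi

    def unify(self, i, j):
        # Unification stage (assumed form): merge base functions i and j
        # into one with averaged center/variance and summed weights
        self.centers[i] = 0.5 * (self.centers[i] + self.centers[j])
        self.variances[i] = 0.5 * (self.variances[i] + self.variances[j])
        self.weights[:, i] += self.weights[:, j]
        self.centers = np.delete(self.centers, j, axis=0)
        self.variances = np.delete(self.variances, j)
        self.weights = np.delete(self.weights, j, axis=1)

def q_learning_step(q, state, action, reward, next_state, gamma=0.95):
    # Standard Q-learning target, computed directly on continuous states
    target = reward + gamma * np.max(q.q_values(next_state))
    q.update(state, action, target)

# Hypothetical usage on a one-dimensional continuous state
q = RBFQFunction(centers=[[0.0], [0.5], [1.0]],
                 variances=[0.1, 0.1, 0.1], n_actions=2)
q_learning_step(q, state=[0.4], action=1, reward=1.0, next_state=[0.6])
```

Because only the weights change during learning, each update is a cheap linear operation, while the unification step keeps the number of base functions, and hence the model size, under control.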