Q-learning with exploration driven by internal dynamics in chaotic neural network
Main Authors: , ,
Format: Conference Proceeding
Language: English
Summary: This paper shows that chaos-based reinforcement learning (RL) using a chaotic neural network (NN) functions not only with Actor-Critic but also with Q-learning. In the chaos-based RL that we have proposed, exploration is performed based on the internal dynamics of a chaotic NN, and those dynamics are expected to become rational through learning. Q-learning is a very popular RL method and is widely used in many studies. We focused on whether Q-learning can be adopted in chaos-based RL, and demonstrated that an agent can learn a goal task in a grid-world environment with chaos-based RL using Q-learning. It was also shown that, as learning progresses, the irregularity in the network outputs originating from the internal chaotic dynamics decreases, so the agent automatically switches from exploration mode to exploitation mode. Moreover, it was confirmed that the agent can adapt to changes in the environment and automatically resume exploration.
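To illustrate the core idea of the summary, the following is a minimal sketch of tabular Q-learning in a grid world where exploration is driven by a deterministic chaotic signal instead of random epsilon-greedy noise. It is not the authors' chaotic-NN architecture: the logistic map as the chaotic source, the 5x5 grid, the rewards, and the decaying noise scale (which mimics the reported shift from exploration to exploitation) are all illustrative assumptions.

```python
import numpy as np

SIZE = 5                                        # hypothetical 5x5 grid
GOAL = (SIZE - 1, SIZE - 1)                     # goal in the far corner
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))        # tabular Q-values
alpha, gamma = 0.1, 0.95

x = 0.345  # internal state of the chaotic map (stands in for NN dynamics)

def chaotic_noise(n):
    """Return n values in [0, 1] from the logistic map x <- 4x(1-x)."""
    global x
    out = np.empty(n)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        out[i] = x
    return out

def step(pos, a):
    """Move in the grid, clamping at walls; reward 1 only at the goal."""
    r = min(max(pos[0] + ACTIONS[a][0], 0), SIZE - 1)
    c = min(max(pos[1] + ACTIONS[a][1], 0), SIZE - 1)
    npos = (r, c)
    return npos, (1.0 if npos == GOAL else 0.0)

for episode in range(300):
    pos = (0, 0)
    # Shrink the chaotic perturbation over episodes: early on the noise
    # dominates action selection (exploration), later the learned
    # Q-values dominate (exploitation).
    scale = 1.0 / (1.0 + 0.05 * episode)
    for t in range(100):
        a = int(np.argmax(Q[pos] + scale * chaotic_noise(len(ACTIONS))))
        npos, reward = step(pos, a)
        # Standard Q-learning update.
        Q[pos][a] += alpha * (reward + gamma * np.max(Q[npos]) - Q[pos][a])
        pos = npos
        if pos == GOAL:
            break
```

After training, a purely greedy rollout from the start state should reach the goal, reflecting the paper's observation that exploratory irregularity gives way to exploitative behavior. In the actual work the chaotic dynamics are internal to the network itself rather than an external noise source added to the Q-values.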
ISSN: 2161-4407
DOI: 10.1109/IJCNN48605.2020.9207114