Dictionary Learning-Structured Reinforcement Learning With Adaptive-Sparsity Regularizer
Published in: IEEE Transactions on Aerospace and Electronic Systems, 2024-04, Vol. 60 (2), pp. 1753-1769
Main Authors:
Format: Article
Language: English
Summary: Deep reinforcement learning (DRL) has been applied to satellite navigation and positioning applications. Its performance relies heavily on the function-approximation capability of deep neural networks. However, existing DRL models suffer from catastrophic interference, resulting in inaccurate function approximation. Sparse-coding-based DRL is an effective method for mitigating this interference, but existing methods face two challenging issues: first, the value-function estimation network suffers from instability under gradient backpropagation, including gradient explosion and gradient vanishing; second, existing methods are limited to hand-crafted sparse regularizers that produce only static sparsity, which can be difficult to apply across varied, dynamic reinforcement learning (RL) environments. In this article, we propose a novel dictionary learning (DL)-structured RL model with an adaptive-sparsity regularizer (ASR) that alleviates catastrophic interference and enables accurate value function approximation, thereby improving RL performance. To alleviate the interference and avoid instability, a feedforward DL-structured RL model is constructed to predict the value function without gradient backpropagation. To learn data-driven sparse representations with adaptive sparsity, we use the learnable sparse regularizer ASR in the model, whose key hyperparameters can be trained to adapt to variable RL environments. To optimize the model efficiently, the model parameters are first learned in a pretraining stage; only the value weights used for value function approximation then need to be fine-tuned in the control-training stage for actual RL applications. Comparative experiments in benchmark environments demonstrate that the proposed method outperforms existing state-of-the-art sparse-coding-based RL algorithms. In terms of accumulated reward (a measure of the quality of the learned policy), the improvement was over 63% in the Cart Pole environment and nearly 10% in Puddle World. Furthermore, the proposed algorithm maintains relatively high performance in the presence of noise up to 20 dB.
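The abstract outlines two mechanisms that a short sketch can make concrete: a feedforward sparse-coding layer (a fixed number of unrolled ISTA iterations over a pretrained dictionary, so no gradient backpropagation is needed) and a control-training stage that fine-tunes only the value weights by temporal-difference learning. The following is a minimal illustrative sketch under those assumptions, not the authors' implementation; the names `soft_threshold`, `sparse_code`, and the fixed threshold `lam` (standing in for the learnable ASR hyperparameter) are hypothetical.

```python
# Minimal sketch (not the paper's code): feedforward sparse coding over a
# pretrained dictionary, plus TD(0) fine-tuning of the value weights only.
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(v, lam):
    """Proximal operator of the l1 norm; here `lam` is a fixed stand-in for
    the learnable sparsity level of the ASR described in the abstract."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_code(x, D, lam, n_iter=20):
    """Feedforward ISTA: a fixed number of unrolled iterations, so the code
    is computed without backpropagation through a deep network."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of D^T D
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - (D.T @ (D @ z - x)) / L, lam / L)
    return z

# --- "pretraining" stand-in: a random but fixed (frozen) dictionary --------
d, k = 8, 32                               # feature dim, dictionary size
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms

# --- control-training stage: fine-tune only the value weights w ------------
w = np.zeros(k)                            # value estimate V(s) = w @ z(s)
alpha, gamma, lam = 0.1, 0.99, 0.05

def td_update(x, reward, x_next, w):
    """Semi-gradient TD(0) step on the value weights; D and lam stay fixed."""
    z, z_next = sparse_code(x, D, lam), sparse_code(x_next, D, lam)
    td_error = reward + gamma * (w @ z_next) - (w @ z)
    return w + alpha * td_error * z

# Toy rollout on random transitions, just to show the update runs.
for _ in range(100):
    x, x_next = rng.standard_normal(d), rng.standard_normal(d)
    w = td_update(x, rng.standard_normal(), x_next, w)

print("nonzeros in a code:",
      np.count_nonzero(sparse_code(rng.standard_normal(d), D, lam)))
```

In this sketch, freezing the dictionary after pretraining confines each TD update to the few weights selected by the code's nonzero entries, which is one way sparse representations can limit the interference between updates that the abstract targets.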
ISSN: 0018-9251, 1557-9603
DOI: 10.1109/TAES.2023.3342794