Asynchronous Curriculum Experience Replay: A Deep Reinforcement Learning Approach for UAV Autonomous Motion Control in Unknown Dynamic Environments
Published in: IEEE Transactions on Vehicular Technology, 2023-11, Vol. 72 (11), pp. 1-16
Main Authors: , , , ,
Format: Article
Language: English
Summary: Unmanned aerial vehicles (UAVs) have been widely used in military warfare, and realizing safe autonomous motion control (AMC) in complex unknown environments remains a challenge. In this paper, we formulate the AMC problem as a Markov decision process (MDP) and propose an advanced deep reinforcement learning (DRL) method that allows UAVs to execute complex tasks in different environments. Aiming to overcome the limitations of prioritized experience replay (PER), the proposed asynchronous curriculum experience replay (ACER) uses multiple threads to asynchronously update priorities and assigns true priorities to increase the diversity of experiences. It also applies a temporary pool to enhance learning from new experiences and changes the experience pool's replacement scheme to first-in-useless-out (FIUO) to make better use of old experiences. In addition, combined with curriculum learning (CL), a more reasonable training paradigm is designed so that ACER trains UAV agents smoothly. By training in a large-scale dynamic environment constructed from the parameters of a real UAV, ACER improves the convergence speed by 24.66% and the convergence result by 5.59% compared to the twin delayed deep deterministic policy gradient (TD3) algorithm. Testing experiments carried out in environments of different complexities further demonstrate the strong robustness and generalization ability of the ACER agents.
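The abstract only outlines ACER's replay mechanics. Below is a minimal, illustrative Python sketch of how a replay buffer with a temporary pool for fresh transitions, a first-in-useless-out (FIUO) main pool, and asynchronous priority updates could be organized. All class and method names, the eviction rule (lowest priority evicted first), and the priority definition (absolute TD error) are assumptions made for illustration and are not taken from the paper.

```python
import threading
import numpy as np


class AsyncCurriculumReplayBuffer:
    """Illustrative ACER-style buffer; details are assumptions, not the paper's design."""

    def __init__(self, capacity=100_000, temp_capacity=1_000):
        self.capacity = capacity
        self.temp_capacity = temp_capacity
        self.pool = []          # main experience pool (FIUO replacement)
        self.priorities = []    # one priority per stored transition
        self.temp_pool = []     # temporary pool: recent experiences, learned from first
        self.lock = threading.Lock()

    def add(self, transition, priority=1.0):
        """New experiences enter the temporary pool so the agent learns from them quickly."""
        with self.lock:
            self.temp_pool.append((transition, priority))
            if len(self.temp_pool) > self.temp_capacity:
                self._flush_temp()

    def _flush_temp(self):
        """Move temporary experiences into the main pool using a first-in-useless-out
        rule: when full, evict the lowest-priority ('least useful') transition,
        not the oldest one."""
        for transition, priority in self.temp_pool:
            if len(self.pool) >= self.capacity:
                worst = int(np.argmin(self.priorities))
                self.pool.pop(worst)
                self.priorities.pop(worst)
            self.pool.append(transition)
            self.priorities.append(priority)
        self.temp_pool.clear()

    def sample(self, batch_size):
        """Sample proportionally to priority, mixing in some fresh temporary-pool transitions."""
        with self.lock:
            fresh = [t for t, _ in self.temp_pool[: batch_size // 2]]
            k = batch_size - len(fresh)
            if not self.pool or k <= 0:
                return fresh, np.array([], dtype=int)
            probs = np.asarray(self.priorities, dtype=np.float64)
            probs /= probs.sum()
            idx = np.random.choice(len(self.pool), size=k, p=probs)
            return fresh + [self.pool[i] for i in idx], idx

    def async_update_priorities(self, compute_td_error):
        """Asynchronous priority refresh: a background thread recomputes priorities
        (here, absolute TD errors) without blocking the training loop."""
        def worker():
            with self.lock:
                for i, transition in enumerate(self.pool):
                    self.priorities[i] = abs(compute_td_error(transition)) + 1e-6

        t = threading.Thread(target=worker, daemon=True)
        t.start()
        return t


# Example usage with dummy data (hypothetical transition format):
# buf = AsyncCurriculumReplayBuffer()
# buf.add(("s", "a", 0.0, "s_next", False))
# batch, idx = buf.sample(32)
```

The curriculum-learning side of ACER (training the UAV agent on progressively harder scenarios) is not shown; one plausible realization, not confirmed by the abstract, is to order training environments by obstacle density or dynamics and advance to the next stage once a success-rate threshold is reached.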
ISSN: 0018-9545, 1939-9359
DOI: 10.1109/TVT.2023.3285595