Adaptive energy-efficient reinforcement learning for AUV 3D motion planning in complex underwater environments
Published in: Ocean Engineering, 2024-11, Vol. 312, Article 119111
Main Authors:
Format: Article
Language: English
Summary: This paper addresses the problem of 3D motion planning for autonomous underwater vehicles (AUVs) in complex underwater environments where prior environmental information is unavailable. A policy-feature-based state-dependent-exploration soft actor-critic (PSDE-SAC) framework integrating a prioritized experience replay (PER) mechanism is developed for energy-efficient AUV underwater navigation. Specifically, a generalized exponential-based energy consumption model is first constructed to enable accurate calculation of the energy consumed between any two points in a 3D underwater environment, even in the presence of environmental disturbances. An adaptive reward function with adjustable weights is then designed to balance energy consumption against travel distance. Building on this reward function, the PSDE-SAC motion planning framework addresses two challenges frequently encountered in reinforcement learning: erratic motion and restricted exploration. In addition, the introduction of PER and policy features significantly enhances the convergence and exploration abilities of the PSDE-SAC framework. Simulation results demonstrate the superiority of the proposed method over other reinforcement learning algorithms in terms of energy consumption, convergence, and stability.
Highlights:
• Introducing a model-free approach for robust decision-making and motion planning.
• Balancing multiple optimization objectives while minimizing energy consumption.
• Integrating PER and policy feature (PF) techniques to achieve smoother navigation.
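The abstract describes an adaptive reward function with adjustable weights that trades off energy consumption against travel distance. The paper's actual formulation is not given in this record, so the following is only a minimal sketch of a weighted two-objective reward; the function name, arguments, and the linear weighting scheme are all assumptions for illustration.

```python
def adaptive_reward(energy, distance, w_energy=0.5):
    """Hypothetical weighted reward balancing energy use and travel distance.

    w_energy in [0, 1] tunes the trade-off; the complementary weight is
    applied to distance. Both terms are costs, so the reward is negative.
    (Sketch only -- not the PSDE-SAC paper's actual reward function.)
    """
    w_distance = 1.0 - w_energy
    return -(w_energy * energy + w_distance * distance)
```

Raising `w_energy` toward 1 would make the agent prefer low-energy paths even when they are longer; lowering it favors shorter, possibly more energy-hungry routes.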
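The framework also integrates a prioritized experience replay (PER) mechanism, in which transitions with larger learning error are sampled more often than in uniform replay. Below is a simplified, self-contained sketch of proportional PER (in the style of Schaul et al.); the class name, the `alpha` exponent, and the omission of importance-sampling weights are simplifications, not the paper's implementation.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch: sampling probability grows
    with |TD error| ** alpha. Importance-sampling correction is omitted
    for brevity. (Illustrative only -- not the paper's implementation.)"""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.data = []          # stored transitions (ring buffer)
        self.priorities = []    # one priority per transition
        self.pos = 0            # next slot to overwrite when full

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps every transition sampleable.
        p = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Draw indices with probability proportional to priority.
        idxs = random.choices(range(len(self.data)),
                              weights=self.priorities, k=batch_size)
        return [self.data[i] for i in idxs], idxs
```

In an actor-critic loop such as SAC, the returned indices would be used to update priorities with fresh TD errors after each gradient step.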
ISSN: 0029-8018
DOI: 10.1016/j.oceaneng.2024.119111