
Manipulation-Compliant Artificial Potential Field and Deep Q-Network: Large Ships Path Planning Based on Deep Reinforcement Learning and Artificial Potential Field

Bibliographic Details
Published in: Journal of marine science and engineering, 2024-08, Vol. 12 (8), p. 1334
Main Authors: Xu, Weifeng; Zhu, Xiang; Gao, Xiaori; Li, Xiaoyong; Cao, Jianping; Ren, Xiaoli; Shao, Chengcheng
Format: Article
Language:English
Description:
Enhancing the path planning capabilities of ships is crucial for ensuring navigation safety, saving time, and reducing energy consumption in complex maritime environments. Traditional methods, reliant on static algorithms and singular models, are frequently limited by the physical constraints of ships, such as turning radius, and struggle to adapt to the maritime environment’s variability and emergencies. The development of reinforcement learning has introduced new methods and perspectives to path planning by addressing complex environments, achieving multi-objective optimization, and enhancing autonomous learning and adaptability, significantly improving the performance and application scope. In this study, we introduce a two-stage path planning approach for large ships named MAPF–DQN, combining Manipulation-Compliant Artificial Potential Field (MAPF) with Deep Q-Network (DQN). In the first stage, we improve the reward function in DQN by integrating the artificial potential field method and use a time-varying greedy algorithm to search for paths. In the second stage, we use the nonlinear Nomoto model for path smoothing to enhance maneuverability. To validate the performance and effectiveness of the algorithm, we conducted extensive experiments using the model of “Yupeng” ship. Case studies and experimental results demonstrate that the MAPF–DQN algorithm can find paths that closely match the actual trajectory under normal environmental conditions and U-shaped obstacles. In summary, the MAPF–DQN algorithm not only enhances the efficiency of path planning for large ships, but also finds relatively safe and maneuverable routes, which are of great significance for maritime activities.
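The abstract's first stage combines an artificial-potential-field-shaped reward with a time-varying greedy exploration schedule inside DQN. A minimal sketch of those two ideas is below; the grid map, goal, gains (`K_ATT`, `ETA`, `D0`), and the exponential decay schedule are illustrative assumptions, not the paper's identified parameters or its actual MAPF formulation.

```python
import math

GOAL = (9, 9)
OBSTACLES = {(4, 4), (4, 5), (5, 4)}   # hypothetical grid obstacles
K_ATT, ETA, D0 = 1.0, 5.0, 3.0          # illustrative APF gains and influence radius

def potential(cell):
    """Attractive potential toward the goal plus repulsive terms near obstacles."""
    gx, gy = GOAL
    u = 0.5 * K_ATT * math.hypot(cell[0] - gx, cell[1] - gy) ** 2
    for ox, oy in OBSTACLES:
        d = math.hypot(cell[0] - ox, cell[1] - oy)
        if 0 < d < D0:
            u += 0.5 * ETA * (1.0 / d - 1.0 / D0) ** 2
    return u

def shaped_reward(state, next_state):
    """Positive reward for moving downhill in the potential field."""
    return potential(state) - potential(next_state)

def epsilon(episode, eps_start=1.0, eps_end=0.05, decay=0.01):
    """Time-varying greedy schedule: explore early, exploit later."""
    return eps_end + (eps_start - eps_end) * math.exp(-decay * episode)
```

With this shaping, a step toward the goal yields a positive reward and a step away a negative one, while `epsilon` decays from 1.0 toward 0.05 as training proceeds.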
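The second stage smooths the path with a nonlinear Nomoto ship-response model. A minimal forward-Euler sketch of a Norrbin-type first-order model, T·ṙ + r + α·r³ = K·δ, is shown here; the coefficients `K`, `T`, and `alpha` are illustrative placeholders, not the “Yupeng” ship's identified values.

```python
import math

def simulate_turn(delta_deg, K=0.2, T=10.0, alpha=0.4, dt=0.1, steps=600):
    """Integrate T*r' + r + alpha*r**3 = K*delta with forward Euler.

    Returns the final yaw rate r (rad/s) and heading psi (rad) after
    steps*dt seconds of a constant rudder angle delta_deg (degrees).
    """
    delta = math.radians(delta_deg)
    r, psi = 0.0, 0.0                       # yaw rate, heading
    for _ in range(steps):
        r_dot = (K * delta - r - alpha * r ** 3) / T
        r += r_dot * dt
        psi += r * dt
    return r, psi
```

For a constant rudder the yaw rate settles near the root of r + α·r³ = K·δ, slightly below the linear steady state K·δ; the cubic term is what makes large turns tighter than the linear Nomoto model predicts.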
DOI: 10.3390/jmse12081334
Publisher: MDPI AG (Basel)
ISSN: 2077-1312
Source: Publicly Available Content Database (ProQuest)
Subjects:
Adaptability
Algorithms
artificial potential field
Deep learning
DQN
Efficiency
Emergency plans
Energy conservation
Energy consumption
Environmental conditions
Greedy algorithms
large ships
Learning
Maneuverability
Manoeuvrability
Methods
Multiple objective analysis
Navigation
Navigation safety
Navigation systems
Optimization techniques
Path planning
Potential fields
Reinforcement
safety
Ships