Multi-Robot Path Planning Method Using Reinforcement Learning
| Published in: | Applied Sciences, 2019-08, Vol. 9 (15), p. 3057 |
|---|---|
| Main Authors: | Bae, Hyansu; Kim, Gidong; Kim, Jonguk; Qian, Dianwei; Lee, Sukgyu |
| Format: | Article |
| Language: | English |
| Online Access: | Get full text |
description | This paper proposes a novel multi-robot path planning algorithm using deep Q-learning combined with a CNN (Convolutional Neural Network). In conventional path planning algorithms, robots need to search a comparatively wide area for navigation and move in a predesigned formation in a given environment. Each robot in the multi-robot system is inherently required to navigate independently while collaborating with other robots for efficient performance. In addition, the robot collaboration scheme depends heavily on the conditions of each robot, such as its position and velocity. However, the conventional method does not actively cope with variable situations, since each robot has difficulty recognizing whether a moving robot nearby is an obstacle or a cooperative robot. To compensate for these shortcomings, we apply deep Q-learning to strengthen the learning algorithm, combined with a CNN, which is needed to analyze the situation efficiently. The CNN analyzes the exact situation using image information on the environment, and the robot navigates based on the situation analyzed through deep Q-learning. Simulation results using the proposed algorithm show flexible and efficient movement of the robots compared with conventional methods in various environments. |
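The Q-learning update that deep Q-learning extends can be illustrated with a tabular toy example. The sketch below is not the authors' implementation: the grid size, reward values, obstacle position, and hyperparameters are all illustrative assumptions. In the paper's setting, a CNN maps environment images to Q-values instead of this lookup table.

```python
# Toy tabular Q-learning on a small grid with one static obstacle.
# Illustrative sketch only; all constants below are assumptions.
import random

random.seed(0)

N = 5                                          # 5x5 grid (assumption)
START, GOAL = (0, 0), (4, 4)
OBSTACLE = (2, 2)                              # static obstacle (assumption)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

Q = {((r, c), a): 0.0 for r in range(N) for c in range(N)
     for a in range(len(ACTIONS))}
alpha, gamma, eps = 0.5, 0.9, 0.1              # learning rate, discount, exploration

def step(s, a):
    """One environment transition: clamp to the grid, block the obstacle."""
    dr, dc = ACTIONS[a]
    nxt = (min(max(s[0] + dr, 0), N - 1), min(max(s[1] + dc, 0), N - 1))
    if nxt == OBSTACLE:
        return s, -1.0, False                  # collision: stay put, penalty
    if nxt == GOAL:
        return nxt, 10.0, True
    return nxt, -0.1, False                    # step cost favors short paths

for episode in range(500):
    s, done = START, False
    while not done:
        # epsilon-greedy action selection
        a = (random.randrange(4) if random.random() < eps
             else max(range(4), key=lambda b: Q[(s, b)]))
        nxt, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state value
        best_next = 0.0 if done else max(Q[(nxt, b)] for b in range(4))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# Greedy rollout of the learned policy (capped in case of non-convergence)
s, path = START, [START]
while s != GOAL and len(path) < 20:
    a = max(range(4), key=lambda b: Q[(s, b)])
    s, _, _ = step(s, a)
    path.append(s)

print("reached goal in", len(path) - 1, "steps")   # shortest route is 8 moves
```

Deep Q-learning replaces the `Q` dictionary with a neural network (here, a CNN over environment images), which is what lets the method generalize to states it has never visited, such as configurations with other moving robots.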
doi_str_mv | 10.3390/app9153057 |
publisher | Basel: MDPI AG |
rights | 2019. This work is licensed under https://creativecommons.org/licenses/by/4.0/ |
orcidid | 0000-0003-3633-3504; 0000-0001-5277-3273 |
identifier | ISSN: 2076-3417 |
source | Publicly Available Content (ProQuest) |
subjects | Algorithms; Artificial intelligence; Behavior; Computer engineering; Computer simulation; Convolution Neural Network; cooperation; Deep q learning; Image processing; International conferences; Learning algorithms; Machine learning; Medical treatment; multi-robots; Multiple robots; Natural language processing; Neural networks; Path planning; reinforcement learning; Robots; Searching; Signal processing; Speech recognition; Voice recognition |