Goal-Oriented Navigation with Avoiding Obstacle based on Deep Reinforcement Learning in Continuous Action Space
Main Authors: | Hien, Pham Xuan ; Kim, Gon-Woo |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Aerospace electronics ; deep reinforcement learning ; Measurement ; Navigation ; obstacle avoidance ; path planning ; Q-learning ; Reinforcement learning ; Robot sensing systems ; Sensors ; Turning |
Online Access: | Request full text |
cited_by | |
---|---|
cites | |
container_end_page | 11 |
container_issue | |
container_start_page | 8 |
container_title | |
container_volume | |
creator | Hien, Pham Xuan ; Kim, Gon-Woo |
description | Obstacle avoidance problems using Deep Reinforcement Learning (DRL) are becoming possible solutions for autonomous mobile robots. In real-world situations with stationary and moving obstacles, mobile robots must be able to navigate to a goal and safely avoid collisions. This work is an extension of ongoing research on the navigation approach for a mobile robot. We show that through the proposed DRL, a goal-oriented collision avoidance model can be trained end-to-end without manual tuning or supervision by a human operator. We suggest performing the obstacle avoidance algorithm of the mobile robot in both simulated environments and the continuous action space of the real world. Finally, we measure and evaluate the obstacle avoidance capability through data collection of hit ratio metrics during robot execution. |
doi_str_mv | 10.23919/ICCAS52745.2021.9649898 |
format | conference_proceeding |
fullrecord | <record><control><sourceid>ieee_CHZPO</sourceid><recordid>TN_cdi_ieee_primary_9649898</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><ieee_id>9649898</ieee_id><sourcerecordid>9649898</sourcerecordid><originalsourceid>FETCH-LOGICAL-i133t-5054ef81d2752889585a8b219ca693441c6ef4f3e8a482738dd52808dcbea0763</originalsourceid><addsrcrecordid>eNotkE1OwzAUhA0SEqX0BGx8gRT_Js_LKECpFBGJwrpynJdilNpVkhZxewJ0NYv5ZqQZQihnSyENN_frosg3WmRKLwUTfGlSZcDABbkBY6TgWnBzSWYiVSKRhvFrshiGT8aYFEyxFGYkrqLtkqr3GEZs6Is9-Z0dfQz0y48fND9F3_iwo1U9jNZ1SGs7TNzkPyAe6Cv60Mbe4X7K0xJtH35pH2gRw-jDMR4Hmru_ws3BOrwlV63tBlycdU7enx7fiuekrFbrIi8Tz6UcE820whZ4IzItAIwGbaGe1jibGqkUdym2qpUIVoHIJDTNxDFoXI2WZamck7v_Xo-I20Pv97b_3p7_kT_W_1t9</addsrcrecordid><sourcetype>Publisher</sourcetype><iscdi>true</iscdi><recordtype>conference_proceeding</recordtype></control><display><type>conference_proceeding</type><title>Goal-Oriented Navigation with Avoiding Obstacle based on Deep Reinforcement Learning in Continuous Action Space</title><source>IEEE Xplore All Conference Series</source><creator>Hien, Pham Xuan ; Kim, Gon-Woo</creator><creatorcontrib>Hien, Pham Xuan ; Kim, Gon-Woo</creatorcontrib><description>Obstacle avoidance problems using Deep Reinforcement Learning (DRL) are becoming possible solutions for autonomous mobile robots. In real-world situations with stationary and moving obstacles, mobile robots must be able to navigate to a goal and safely avoid collisions. This work is an extension of ongoing research on the navigation approach for a mobile robot. We show that through the proposed DRL, a goal-oriented collision avoidance model can be trained end-to-end without manual turning or supervision by a human operator. We suggest performing the obstacle avoidance algorithm of the mobile robot in both simulated environments and continuous action space of the real world. 
Finally, we measure and evaluate the obstacle avoidance capability through data collection of hit ratio metrics during robot execution.</description><identifier>EISSN: 2642-3901</identifier><identifier>EISBN: 8993215219</identifier><identifier>EISBN: 9788993215212</identifier><identifier>DOI: 10.23919/ICCAS52745.2021.9649898</identifier><language>eng</language><publisher>ICROS</publisher><subject>Aerospace electronics ; deep reinforcement learning ; Measurement ; Navigation ; obstacle avoidance ; path planning ; Q-learning ; Reinforcement learning ; Robot sensing systems ; Sensors ; Turning</subject><ispartof>2021 21st International Conference on Control, Automation and Systems (ICCAS), 2021, p.8-11</ispartof><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://ieeexplore.ieee.org/document/9649898$$EHTML$$P50$$Gieee$$H</linktohtml><link.rule.ids>309,310,780,784,789,790,23930,23931,25140,27925,54555,54932</link.rule.ids><linktorsrc>$$Uhttps://ieeexplore.ieee.org/document/9649898$$EView_record_in_IEEE$$FView_record_in_$$GIEEE</linktorsrc></links><search><creatorcontrib>Hien, Pham Xuan</creatorcontrib><creatorcontrib>Kim, Gon-Woo</creatorcontrib><title>Goal-Oriented Navigation with Avoiding Obstacle based on Deep Reinforcement Learning in Continuous Action Space</title><title>2021 21st International Conference on Control, Automation and Systems (ICCAS)</title><addtitle>ICCAS</addtitle><description>Obstacle avoidance problems using Deep Reinforcement Learning (DRL) are becoming possible solutions for autonomous mobile robots. In real-world situations with stationary and moving obstacles, mobile robots must be able to navigate to a goal and safely avoid collisions. This work is an extension of ongoing research on the navigation approach for a mobile robot. 
We show that through the proposed DRL, a goal-oriented collision avoidance model can be trained end-to-end without manual turning or supervision by a human operator. We suggest performing the obstacle avoidance algorithm of the mobile robot in both simulated environments and continuous action space of the real world. Finally, we measure and evaluate the obstacle avoidance capability through data collection of hit ratio metrics during robot execution.</description><subject>Aerospace electronics</subject><subject>deep reinforcement learning</subject><subject>Measurement</subject><subject>Navigation</subject><subject>obstacle avoidance</subject><subject>path planning</subject><subject>Q-learning</subject><subject>Reinforcement learning</subject><subject>Robot sensing systems</subject><subject>Sensors</subject><subject>Turning</subject><issn>2642-3901</issn><isbn>8993215219</isbn><isbn>9788993215212</isbn><fulltext>true</fulltext><rsrctype>conference_proceeding</rsrctype><creationdate>2021</creationdate><recordtype>conference_proceeding</recordtype><sourceid>6IE</sourceid><recordid>eNotkE1OwzAUhA0SEqX0BGx8gRT_Js_LKECpFBGJwrpynJdilNpVkhZxewJ0NYv5ZqQZQihnSyENN_frosg3WmRKLwUTfGlSZcDABbkBY6TgWnBzSWYiVSKRhvFrshiGT8aYFEyxFGYkrqLtkqr3GEZs6Is9-Z0dfQz0y48fND9F3_iwo1U9jNZ1SGs7TNzkPyAe6Cv60Mbe4X7K0xJtH35pH2gRw-jDMR4Hmru_ws3BOrwlV63tBlycdU7enx7fiuekrFbrIi8Tz6UcE820whZ4IzItAIwGbaGe1jibGqkUdym2qpUIVoHIJDTNxDFoXI2WZamck7v_Xo-I20Pv97b_3p7_kT_W_1t9</recordid><startdate>20211012</startdate><enddate>20211012</enddate><creator>Hien, Pham Xuan</creator><creator>Kim, Gon-Woo</creator><general>ICROS</general><scope>6IE</scope><scope>6IL</scope><scope>CBEJK</scope><scope>RIE</scope><scope>RIL</scope></search><sort><creationdate>20211012</creationdate><title>Goal-Oriented Navigation with Avoiding Obstacle based on Deep Reinforcement Learning in Continuous Action Space</title><author>Hien, Pham Xuan ; Kim, 
Gon-Woo</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-i133t-5054ef81d2752889585a8b219ca693441c6ef4f3e8a482738dd52808dcbea0763</frbrgroupid><rsrctype>conference_proceedings</rsrctype><prefilter>conference_proceedings</prefilter><language>eng</language><creationdate>2021</creationdate><topic>Aerospace electronics</topic><topic>deep reinforcement learning</topic><topic>Measurement</topic><topic>Navigation</topic><topic>obstacle avoidance</topic><topic>path planning</topic><topic>Q-learning</topic><topic>Reinforcement learning</topic><topic>Robot sensing systems</topic><topic>Sensors</topic><topic>Turning</topic><toplevel>online_resources</toplevel><creatorcontrib>Hien, Pham Xuan</creatorcontrib><creatorcontrib>Kim, Gon-Woo</creatorcontrib><collection>IEEE Electronic Library (IEL) Conference Proceedings</collection><collection>IEEE Proceedings Order Plan All Online (POP All Online) 1998-present by volume</collection><collection>IEEE Xplore All Conference Proceedings</collection><collection>IEEE Electronic Library (IEL)</collection><collection>IEEE Proceedings Order Plans (POP All) 1998-Present</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Hien, Pham Xuan</au><au>Kim, Gon-Woo</au><format>book</format><genre>proceeding</genre><ristype>CONF</ristype><atitle>Goal-Oriented Navigation with Avoiding Obstacle based on Deep Reinforcement Learning in Continuous Action Space</atitle><btitle>2021 21st International Conference on Control, Automation and Systems (ICCAS)</btitle><stitle>ICCAS</stitle><date>2021-10-12</date><risdate>2021</risdate><spage>8</spage><epage>11</epage><pages>8-11</pages><eissn>2642-3901</eissn><eisbn>8993215219</eisbn><eisbn>9788993215212</eisbn><abstract>Obstacle avoidance problems using Deep Reinforcement Learning (DRL) are becoming possible solutions for autonomous mobile robots. 
In real-world situations with stationary and moving obstacles, mobile robots must be able to navigate to a goal and safely avoid collisions. This work is an extension of ongoing research on the navigation approach for a mobile robot. We show that through the proposed DRL, a goal-oriented collision avoidance model can be trained end-to-end without manual turning or supervision by a human operator. We suggest performing the obstacle avoidance algorithm of the mobile robot in both simulated environments and continuous action space of the real world. Finally, we measure and evaluate the obstacle avoidance capability through data collection of hit ratio metrics during robot execution.</abstract><pub>ICROS</pub><doi>10.23919/ICCAS52745.2021.9649898</doi><tpages>4</tpages></addata></record> |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2642-3901 |
ispartof | 2021 21st International Conference on Control, Automation and Systems (ICCAS), 2021, p.8-11 |
issn | 2642-3901 |
language | eng |
recordid | cdi_ieee_primary_9649898 |
source | IEEE Xplore All Conference Series |
subjects | Aerospace electronics ; deep reinforcement learning ; Measurement ; Navigation ; obstacle avoidance ; path planning ; Q-learning ; Reinforcement learning ; Robot sensing systems ; Sensors ; Turning |
title | Goal-Oriented Navigation with Avoiding Obstacle based on Deep Reinforcement Learning in Continuous Action Space |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T18%3A20%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Goal-Oriented%20Navigation%20with%20Avoiding%20Obstacle%20based%20on%20Deep%20Reinforcement%20Learning%20in%20Continuous%20Action%20Space&rft.btitle=2021%2021st%20International%20Conference%20on%20Control,%20Automation%20and%20Systems%20(ICCAS)&rft.au=Hien,%20Pham%20Xuan&rft.date=2021-10-12&rft.spage=8&rft.epage=11&rft.pages=8-11&rft.eissn=2642-3901&rft_id=info:doi/10.23919/ICCAS52745.2021.9649898&rft.eisbn=8993215219&rft.eisbn_list=9788993215212&rft_dat=%3Cieee_CHZPO%3E9649898%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i133t-5054ef81d2752889585a8b219ca693441c6ef4f3e8a482738dd52808dcbea0763%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9649898&rfr_iscdi=true |
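The abstract above says the obstacle avoidance capability is evaluated "through data collection of hit ratio metrics during robot execution." The record does not define the metric, so the following is a minimal sketch assuming the hit ratio is the fraction of evaluation episodes in which the robot collides with an obstacle; the function name and definition are illustrative assumptions, not taken from the paper:

```python
def hit_ratio(episode_collisions):
    """Fraction of episodes that ended in a collision.

    episode_collisions: list of bools, one per evaluation episode,
    True if the robot hit an obstacle during that episode.
    """
    if not episode_collisions:
        return 0.0  # no episodes recorded yet
    return sum(episode_collisions) / len(episode_collisions)

# Example: 1 collision in 4 evaluation episodes -> 0.25
print(hit_ratio([False, True, False, False]))
```

In practice such a metric would typically be reported alongside a success rate (goal reached without collision) over the same evaluation runs.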