
Multi-Scale Fully Convolutional Network-Based Semantic Segmentation for Mobile Robot Navigation

Bibliographic Details
Published in: Electronics (Basel) 2023-02, Vol.12 (3), p.533
Main Authors: Dang, Thai-Viet; Bui, Ngoc-Tam
Format: Article
Language: English
Description: In computer vision and mobile robotics, autonomous navigation is crucial: it enables a robot to move through an environment consisting primarily of obstacles and moving objects. Robot navigation based on obstacle detection, such as walls and pillars, is not only essential but also challenging due to real-world complications. This study provides a real-time solution to the problem of understanding hallway scenes from a single image. The authors predict a dense scene labeling using a multi-scale fully convolutional network (FCN). The output is an image with pixel-by-pixel predictions that can be used for various navigation strategies. In addition, a method for comparing the computational cost and precision of several FCN architectures built on VGG-16 is introduced. The method outperforms competing works in two areas: binary semantic segmentation and optimal obstacle-avoidance navigation of autonomous mobile robots. The authors apply perspective correction to the segmented image to construct the frontal view of the general area, which identifies the available moving area. The optimal obstacle-avoidance strategy consists primarily of collision-free path planning, reasonable processing time, and smooth steering with small steering-angle changes.
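The perspective-correction step mentioned in the abstract amounts to applying a planar homography to the pixel coordinates of the segmented floor region. The sketch below is illustrative only, not the authors' implementation; the homography matrix and image dimensions are hypothetical, and only NumPy is assumed:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T                              # project through H
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the scale w

# Hypothetical homography: the nonzero (2,1) entry makes the projective
# scale w grow with the row index y, mimicking a ground-plane rectification.
H = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.002, 1.0]])
corners = np.array([[0.0, 0.0], [639.0, 479.0]])  # two corners of a 640x480 image
print(warp_points(H, corners))
```

In practice the matrix would be estimated from four known ground-plane correspondences rather than written by hand, and the same transform would be applied to the whole segmented mask to obtain the frontal view of the drivable area.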
DOI: 10.3390/electronics12030533
ISSN: 2079-9292
Source: Publicly Available Content (ProQuest)
Subjects:
Accuracy
Algorithms
Autonomous navigation
Cameras
Classification
Collision avoidance
Computer networks
Computer vision
Control systems
Datasets
Deep learning
Halls
Image processing
Image segmentation
Machine vision
Mobile computing
Mobile robots
Motion
Neural networks
Object recognition
Obstacle avoidance
Path planning
Pixels
Robot dynamics
Robotics
Robots
Semantic segmentation
Semantics
Sensors
Steering