Visual Navigation Based on Semantic Segmentation Using Only a Monocular Camera as an External Sensor

Bibliographic Details
Published in: Journal of Robotics and Mechatronics, 2020-12, Vol. 32 (6), p. 1137-1153
Main Authors: Miyamoto, Ryusuke, Adachi, Miho, Ishida, Hiroki, Watanabe, Takuto, Matsutani, Kouchi, Komatsuzaki, Hayato, Sakata, Shogo, Yokota, Raimu, Kobayashi, Shingo
Format: Article
Language: English
Description
Summary: The most popular external sensor for robots capable of autonomous movement is 3D LiDAR. However, robots that operate in environments where humans live their daily lives are typically also equipped with cameras, so that they can obtain the same information that is presented to humans, even though autonomous movement itself can be performed using only 3D LiDAR. Relatively few studies have addressed autonomous movement for robots using only visual sensors, yet this type of approach is effective at reducing the cost of sensing devices per robot. To reduce the number of external sensors required for autonomous movement, this paper proposes a novel visual navigation scheme that uses only a monocular camera as an external sensor. The key concept of the proposed scheme is to select a target point in the input image toward which the robot can move based on the results of semantic segmentation, so that road following and obstacle avoidance are performed simultaneously. Additionally, a novel scheme called virtual LiDAR, also based on the results of semantic segmentation, is proposed to estimate the orientation of the robot relative to the current path in a traversable area. Experiments conducted during the Tsukuba Challenge 2019 demonstrated that the robot can operate in a real environment containing several obstacles, such as humans and other robots, provided that correct semantic segmentation results are available.
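For illustration only, the Python sketch below shows one way the two ideas named in the summary could be approximated from a segmentation result: casting "virtual LiDAR" rays over a traversable-area mask and picking the most open direction as a crude target. The function names, mask layout, ray origin, and field of view are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def virtual_lidar_scan(traversable_mask, num_rays=181, fov_deg=180.0):
    """Cast virtual rays over a boolean traversable-area mask (H x W) from the
    bottom-center pixel and return the free distance (in pixels) per ray.

    Angles sweep from -fov/2 (left) to +fov/2 (right); 0 rad points straight
    up the image, mimicking a planar LiDAR scan in image space.
    NOTE: this is an illustrative assumption, not the paper's algorithm.
    """
    h, w = traversable_mask.shape
    origin = np.array([h - 1, w // 2], dtype=float)            # (row, col)
    angles = np.deg2rad(np.linspace(-fov_deg / 2.0, fov_deg / 2.0, num_rays))
    max_range = int(np.hypot(h, w))
    distances = np.full(num_rays, float(max_range))

    for i, a in enumerate(angles):
        step = np.array([-np.cos(a), np.sin(a)])               # up = -row, right = +col
        for r in range(1, max_range):
            row, col = (origin + r * step).astype(int)
            # Stop at the image border or the first non-traversable pixel.
            if not (0 <= row < h and 0 <= col < w) or not traversable_mask[row, col]:
                distances[i] = float(r)
                break
    return angles, distances

def pick_target_angle(angles, distances):
    """Pick the ray with the largest free distance as a rough target direction,
    combining road following and obstacle avoidance in a single step."""
    return angles[int(np.argmax(distances))]

# Toy usage: a 120 x 160 mask whose lower part is traversable.
mask = np.zeros((120, 160), dtype=bool)
mask[40:, :] = True
angles, dists = virtual_lidar_scan(mask)
heading = pick_target_angle(angles, dists)
```

In this sketch the scan plays the role of a planar range sensor derived purely from the segmentation mask; a real system would map the chosen image direction back to a steering command and, as in the paper, estimate the robot's orientation relative to the path from the shape of the traversable region.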
ISSN: 0915-3942, 1883-8049
DOI: 10.20965/jrm.2020.p1137