Real-time Visual-Based Localization for Mobile Robot Using Structured-View Deep Learning
Main Authors: 
Format: Conference Proceeding
Language: English
Subjects: 
Online Access: Request full text
Summary: This paper demonstrates a place recognition and localization method designed for the automated guidance of mobile robots. Collecting and annotating enough images for a supervised deep learning model is often exhausting work, and devising an effective visual detection scheme for locating a mobile robot in a feature-barren environment, such as the indoor corridors of a building, is also quite challenging. To address these issues, a supervised deep learning model for detecting the spatial coordinates of a mobile robot is proposed here. Specifically, a novel technique is introduced that structures and collages the surrounding views obtained by the on-board cameras to prepare the training data. A system linking robot kinematics and image processing provides automatic data annotation, which significantly reduces the human effort needed for data preparation. Experimental evidence showed that the precision and recall rates of the location-coordinate detection are 0.91 and 0.85, respectively. The detection also proved effective over a path width of 0.75 m, which is sufficient to cover possible deviations from the target path. Furthermore, each visual detection performed by an ordinary PC on board the mobile robot took 0.14 s on average, so real-time navigation using the proposed method is achievable.
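The summary describes two ideas at a high level: tiling the surrounding camera views into one structured training image, and labelling that image automatically from the robot's kinematics instead of by hand. The sketch below illustrates how such a pipeline could look in Python; the camera layout, tile size, grid resolution, and odometry interface are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the structured-view collage and automatic annotation
# ideas from the abstract. All concrete values (tile size, grid resolution,
# map width) are assumptions for illustration only.

import numpy as np


def make_view_collage(views: list, tile_size=(224, 224)) -> np.ndarray:
    """Resize each on-board camera view to a common tile size and stack
    the tiles side by side into one structured collage image."""
    import cv2  # used only for resizing; any resampler would do

    tiles = [cv2.resize(v, tile_size) for v in views]
    return np.hstack(tiles)


def annotate_from_odometry(pose_xy, grid_res: float = 0.25, map_width: int = 100) -> int:
    """Map the robot's (x, y) pose, taken from its kinematics/odometry,
    to a discrete location class index, so no manual labelling is needed."""
    x, y = pose_xy
    gx, gy = int(round(x / grid_res)), int(round(y / grid_res))
    # Pack the 2-D grid cell into a single class id (assumed map_width cells wide)
    return gy * map_width + gx


if __name__ == "__main__":
    # Fake frames standing in for front / left / right on-board cameras
    frames = [np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(3)]
    sample = make_view_collage(frames)           # one training image
    label = annotate_from_odometry((1.5, 3.0))   # its automatically generated label
    print(sample.shape, label)
```

In this reading, each training example is produced without human effort: the collage is the input and the odometry-derived class index is the target, which matches the paper's claim that linking robot kinematics with image processing removes most of the annotation workload.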
ISSN: 2161-8089
DOI: 10.1109/COASE.2019.8842974