Discriminative Map Retrieval Using View-Dependent Map Descriptor

Bibliographic Details
Published in: arXiv.org, 2015-09
Main Authors: Liu, Enfu, Tanaka, Kanji
Format: Article
Language: English
Description
Summary: Map retrieval, the problem of similarity search over a large collection of 2D pointset maps previously built by mobile robots, is crucial for autonomous navigation in indoor and outdoor environments. Bag-of-words (BoW) methods constitute a popular approach to map retrieval; however, these methods have extremely limited descriptive ability because they ignore the spatial layout information of the local features. The main contribution of this paper is an extension of the bag-of-words map retrieval method to enable the use of spatial information from local features. Our strategy is to explicitly model a unique viewpoint of an input local map; the pose of the local feature is defined with respect to this unique viewpoint, and can be viewed as an additional invariant feature for discriminative map retrieval. Specifically, we wish to determine a unique viewpoint that is invariant to moving objects, clutter, occlusions, and actual viewpoints. Hence, we perform scene parsing to analyze the scene structure, and consider the "center" of the scene structure to be the unique viewpoint. Our scene parsing is based on a Manhattan world grammar that imposes a quasi-Manhattan world constraint to enable the robust detection of a scene structure that is invariant to clutter and moving objects. Experimental results using the publicly available radish dataset validate the efficacy of the proposed approach.
ISSN: 2331-8422
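
The summary above outlines the core mechanism: recover a single, structure-derived viewpoint per local map and express each local feature's pose relative to that viewpoint, so the relative pose becomes an extra viewpoint-invariant cue alongside the BoW word. The sketch below is a minimal illustration of that idea, not the authors' implementation: the Manhattan-grammar scene parsing is replaced here by a crude centroid-plus-dominant-wall-angle estimate, and all function and parameter names (estimate_unique_viewpoint, view_dependent_descriptor, n_words, n_bins) are hypothetical.

```python
import numpy as np

def estimate_unique_viewpoint(map_points):
    """Stand-in for the paper's Manhattan-grammar scene parsing (assumption):
    take the centroid of the 2D pointset as the unique viewpoint and align the
    reference heading to the dominant wall direction, found by folding edge
    angles into [0, pi/2) under a quasi-Manhattan assumption."""
    centroid = map_points.mean(axis=0)
    diffs = np.diff(map_points, axis=0)
    angles = np.arctan2(diffs[:, 1], diffs[:, 0]) % (np.pi / 2)
    hist, edges = np.histogram(angles, bins=90, range=(0.0, np.pi / 2))
    k = hist.argmax()
    dominant_heading = 0.5 * (edges[k] + edges[k + 1])
    return centroid, dominant_heading

def view_dependent_descriptor(feature_xy, word_ids, map_points,
                              n_words=1024, n_bins=8):
    """Build a joint histogram over (visual word, relative-bearing bin).
    `feature_xy` are 2D positions of local features, `word_ids` their BoW
    word indices in [0, n_words). The bearing of each feature is measured
    from the estimated unique viewpoint, so the descriptor is view-dependent
    with respect to the scene structure but independent of the robot pose."""
    origin, heading = estimate_unique_viewpoint(map_points)
    rel = feature_xy - origin
    bearings = (np.arctan2(rel[:, 1], rel[:, 0]) - heading) % (2 * np.pi)
    bearing_bins = np.minimum((bearings / (2 * np.pi) * n_bins).astype(int),
                              n_bins - 1)
    hist = np.zeros((n_words, n_bins))
    np.add.at(hist, (word_ids, bearing_bins), 1.0)
    return hist.ravel()

# Toy usage with random data; in practice word_ids would come from a visual
# vocabulary and map_points from a previously built 2D pointset map.
rng = np.random.default_rng(0)
map_points = rng.uniform(-10.0, 10.0, size=(500, 2))
feature_xy = map_points[:50]
word_ids = rng.integers(0, 1024, size=50)
descriptor = view_dependent_descriptor(feature_xy, word_ids, map_points)
```

Compared with a plain BoW histogram of length n_words, the joint (word, bearing) histogram preserves coarse spatial layout, which is what gives the retrieval its added discriminative power in the paper's formulation.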