Multi-Modal Place Recognition via Vectorized HD Maps and Images Fusion for Autonomous Driving
Published in: IEEE Robotics and Automation Letters, 2024-05, Vol. 9 (5), p. 4710-4717
Main Authors: , , ,
Format: Article
Language: English
Summary: The deployment of autonomous vehicles and mobile robots requires light, fast, and robust visual place recognition strategies. While visual place recognition has proven effective in favorable conditions, its performance drops quickly in the presence of repetitive visual cues, such as the repeating image patterns commonly found in driving environments. To address this problem, a representation that incorporates geometric cues with structural semantics can be used to localize an agent, reducing the reliance on visual cues alone. In this letter, we present the first multi-modal place recognition approach for autonomous driving that utilizes both images and vectorized HD maps. Vectorized HD maps have the advantage of being lightweight and providing geometric cues with structural semantics, making them particularly well-suited for place recognition. To exploit this, we employ a hierarchical graph neural network to extract a compact and robust descriptor from a local vectorized map that can be captured from surrounding images. Although HD maps provide concise geometric cues with structural semantics, they sometimes do not provide sufficient features for place recognition, unlike images. To cope with this limitation, we propose to adaptively fuse the descriptors extracted from maps and images via a transformer-based solution, combining the complementary strengths of each modality. Extensive experiments on the large-scale driving datasets NuScenes and Argoverse2 demonstrate that our multi-modal visual localization outperforms visual-only approaches. Specifically, our method improves the baseline by up to 6.48 percentage points in Recall@1 with less than 10 ms of additional computation.
ISSN: 2377-3766
DOI: 10.1109/LRA.2024.3374193
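
As a rough illustration of the adaptive map-image fusion described in the summary above, the sketch below combines a map descriptor and an image descriptor with a small transformer encoder and attention pooling. This is not the authors' implementation: the `AdaptiveFusion` module, its dimensions, and the pooling scheme are assumptions made purely for illustration, and the two input descriptors stand in for the outputs of the paper's hierarchical GNN (map branch) and an image place-recognition backbone.

```python
# Hypothetical sketch (not the paper's code): adaptively fuse a map
# descriptor and an image descriptor into one place descriptor, assuming
# both branches already produce embeddings of a common dimension `dim`.
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    """Fuse per-place map and image descriptors into one global descriptor."""

    def __init__(self, dim: int = 256, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        # Learnable embeddings marking which modality each token came from.
        self.modality_embed = nn.Parameter(torch.zeros(2, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Attention pooling yields per-modality weights, so the network can
        # lean on the map or the image depending on how informative each is.
        self.pool = nn.Linear(dim, 1)

    def forward(self, map_desc: torch.Tensor, img_desc: torch.Tensor) -> torch.Tensor:
        # map_desc, img_desc: (batch, dim) descriptors from the two branches.
        tokens = torch.stack([map_desc, img_desc], dim=1) + self.modality_embed
        tokens = self.encoder(tokens)                      # (batch, 2, dim)
        weights = torch.softmax(self.pool(tokens), dim=1)  # (batch, 2, 1)
        fused = (weights * tokens).sum(dim=1)              # (batch, dim)
        return nn.functional.normalize(fused, dim=-1)      # L2-normalized descriptor


if __name__ == "__main__":
    fusion = AdaptiveFusion(dim=256)
    map_desc = torch.randn(8, 256)   # stand-in for the hierarchical GNN output
    img_desc = torch.randn(8, 256)   # stand-in for the image branch output
    print(fusion(map_desc, img_desc).shape)  # torch.Size([8, 256])
```

The per-modality softmax weights let the fused descriptor rely more on the map or on the image depending on how informative each modality is for the current scene, which mirrors the adaptive, complementary fusion idea stated in the summary.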