2D-3D Cross-Modality Network for End-to-End Localization with Probabilistic Supervision
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Summary: Accurate localization is a crucial capability for autonomous robots. Given an existing LiDAR 3D point cloud map, it is cost-effective to localize the robot with only an onboard camera rather than a LiDAR. However, matching 2D visual information against a 3D point cloud map is highly challenging due to the differing modalities and dimensions as well as noise and occlusion. To overcome these challenges, we propose an end-to-end neural network-based solution that determines the 6-DoF pose of the camera relative to an existing LiDAR map with centimeter accuracy. Given a query image, a pre-acquired point cloud, and an initial pose, the cross-modality network outputs a precise pose. By projecting the 3D point cloud onto the image plane, a depth image is rendered as seen from the initial pose. A cross-modality flow network then establishes correspondences between 2D pixels and the projected points. Importantly, we leverage a robust probabilistic Perspective-n-Point (PnP) module, which fine-tunes the correspondence pairs and learns their weights in an end-to-end manner. A comprehensive evaluation of the proposed algorithm is conducted on the KITTI dataset. Furthermore, deploying the algorithm in a real-world parking lot scenario validates its strong practicality. We highlight that this research offers a cost-effective and highly accurate solution that can be readily deployed on low-cost commercial vehicles.
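The summary describes a first stage in which the pre-acquired point cloud is projected onto the image plane, as seen from the initial pose, to form a depth image. The sketch below illustrates that projection step only, using a standard pinhole model with a z-buffer; the function name, the intrinsic matrix `K`, and the transform `T_cam_from_map` are illustrative assumptions and not taken from the paper's implementation.

```python
import numpy as np

def project_points_to_depth_image(points_map, T_cam_from_map, K, image_size):
    """Render a depth image by projecting map points into the camera view.

    points_map: (N, 3) array of 3D points in the map (LiDAR) frame.
    T_cam_from_map: (4, 4) homogeneous transform given by the initial pose guess.
    K: (3, 3) camera intrinsic matrix.
    image_size: (height, width) of the output depth image.
    """
    h, w = image_size
    depth = np.zeros((h, w), dtype=np.float32)

    # Transform points into the camera frame using the initial pose estimate.
    pts_h = np.hstack([points_map, np.ones((points_map.shape[0], 1))])
    pts_cam = (T_cam_from_map @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection onto the image plane.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v, z = uv[:, 0].astype(int), uv[:, 1].astype(int), pts_cam[:, 2]

    # Discard projections that fall outside the image bounds.
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[valid], v[valid], z[valid]

    # Z-buffer: sort far-to-near so nearer points overwrite farther ones.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

The resulting depth image and the query RGB image would then be the two inputs to the cross-modality flow network described in the summary.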
ISSN: 2642-7214
DOI: 10.1109/IV55156.2024.10588575