
Visual-inertial simultaneous localization and mapping: Dynamically fused point-line feature extraction and engineered robotic applications

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, p. 1-1
Main Authors: Xia, Linlin, Meng, Deang, Zhang, Jingjing, Zhang, Daochang, Hu, Zhiqi
Format: Article
Language:English
Description
Summary: For robots operating in unknown environments, visual-inertial SLAM (simultaneous localization and mapping) fuses multi-sensor data to enable globally consistent trajectory tracking and mapping. Given the structured scenes encountered in SLAM, high-quality line segments can serve as primitives that provide additional constraints for camera pose estimation. This study presents a pipeline design for a dynamically fused point-line VINS (visual-inertial navigation system). The design builds on the heritage of VINS-Mono but employs an ensemble pose estimation strategy to integrate point and line features. To the best of our knowledge, it is the first optimization-based monocular VINS method to examine the quantitative relationship between the numbers of point and line features. Additionally, the pixel length used for line feature selection and two hidden parameters of the LSD (line segment detector) extractor are investigated, optimizing line feature extraction in both quality and efficiency. Benchmark dataset and real-world tests are conducted. In challenging scenes from the EuRoC dataset, results compared against the ROVIO, VINS-Mono, and PL-VINS frameworks reveal that our design is substantially more accurate and robust than the others, delivering both locally and globally consistent pose estimates. The engineered algorithm, running on the 'Mynteye' camera, further enables verification on the Bulldog-CX robot platform. Results concerning drift, line feature extraction time, and real-world trajectory consistency show that our design outperforms point-based VINS. The strong adaptability of our dynamically fused point-line VINS to a broad range of indoor and outdoor scenes is also verified.
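The abstract mentions tuning the measured pixel length used to select line features from the LSD extractor's output. As a minimal sketch of that idea (not the authors' actual implementation; the function name and threshold are illustrative assumptions), one can discard short, noisy segments by their Euclidean pixel length before using the survivors as pose estimation primitives:

```python
import numpy as np

def filter_lines_by_length(segments, min_len_px):
    """Keep only line segments whose pixel length is at least min_len_px.

    segments: (N, 4) array-like of endpoints [x1, y1, x2, y2].
    Returns the surviving segments and their pixel lengths.
    """
    segments = np.asarray(segments, dtype=float)
    lengths = np.hypot(segments[:, 2] - segments[:, 0],
                       segments[:, 3] - segments[:, 1])
    keep = lengths >= min_len_px
    return segments[keep], lengths[keep]

# Example: three hypothetical LSD detections; keep segments >= 30 px long.
segs = [[0, 0, 50, 0],     # 50 px, kept
        [10, 10, 20, 10],  # 10 px, discarded
        [0, 0, 30, 40]]    # 50 px, kept
kept, lens = filter_lines_by_length(segs, 30)
```

Raising the threshold trades recall for quality and speed: fewer but longer segments survive, which is consistent with the abstract's goal of optimizing line extraction in both quality and efficiency.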
ISSN:0018-9456
1557-9662
DOI:10.1109/TIM.2022.3198724