Robust Visual-Inertial-Wheel SLAM for Ground Robots in Complicated Scenes
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Simultaneous Localization and Mapping (SLAM) in real-world scenes faces several challenges: highly dynamic people and vehicles invalidate the static-world assumption of traditional SLAM, and dim indoor scenes cause the loss of visual features. This paper presents a robust visual-inertial-wheel SLAM algorithm for ground robots that handles these challenges in complicated environments. The proposed algorithm introduces a lightweight object-detection network as the front-end to remove dynamic features in real time, and develops a novel filter-based approach to enhance feature tracking. The back-end is modeled as an optimization-based bundle adjustment problem that includes constraints from IMU, wheel, visual, and ground measurements. The proposed method is extensively evaluated on the OpenLORIS dataset, which includes real-world complicated scenes, to validate its effectiveness.
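The front-end step the summary describes, rejecting visual features that fall inside detections of dynamic objects (people, vehicles) before they enter pose estimation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bounding-box format, class names, and function names are assumptions.

```python
# Illustrative sketch of dynamic-feature removal: discard tracked feature
# points that lie inside bounding boxes of detected dynamic objects.
# Box format (x1, y1, x2, y2) and the class list are assumptions.

DYNAMIC_CLASSES = {"person", "car", "bus", "bicycle"}

def in_box(pt, box):
    """Return True if feature point (x, y) lies inside box (x1, y1, x2, y2)."""
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def filter_dynamic_features(features, detections):
    """Keep only features outside every dynamic-object detection.

    features:   list of (x, y) pixel coordinates from the feature tracker
    detections: list of (class_name, (x1, y1, x2, y2)) from the detector
    """
    dynamic_boxes = [box for cls, box in detections if cls in DYNAMIC_CLASSES]
    return [pt for pt in features
            if not any(in_box(pt, box) for box in dynamic_boxes)]

# Example: a pedestrian box removes the one feature inside it;
# the static "tree" detection leaves features untouched.
feats = [(10, 10), (50, 60), (200, 120)]
dets = [("person", (40, 40, 100, 100)), ("tree", (150, 100, 300, 200))]
print(filter_dynamic_features(feats, dets))  # [(10, 10), (200, 120)]
```

The surviving (presumed-static) features would then feed the back-end bundle adjustment alongside the IMU, wheel, and ground constraints.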
ISSN: 2837-8601
DOI: 10.1109/YAC63405.2024.10598676