Can Semantic-based Filtering of Dynamic Objects improve Visual SLAM and Visual Odometry?
Main Authors:
Format: Conference Proceeding
Language: English
Summary: This work introduces a novel approach to improving robot perception in dynamic environments using semantic filtering. The goal is to enhance Visual Simultaneous Localization and Mapping (V-SLAM) and Visual Odometry (VO) by excluding feature points associated with moving objects. Four approaches for semantic extraction were evaluated: YOLOv3, DeepLabv3 with two different backbones, and Mask R-CNN. The framework was tested on several datasets, including KITTI, TUM, and a simulated sequence generated in AirSim. The results showed that the proposed semantic filtering significantly reduced estimation errors in VO, with average error reductions ranging from 2.81% to 15.98%, while the V-SLAM results were similar to those of the base work, especially on sequences with detected loops. Although filtering leaves fewer keypoints available, VO estimation benefits from excluding the points on moving objects. Further experiments are needed to explain the effects observed in V-SLAM, which appear tied to loop closure and the nature of the datasets.
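The abstract describes the core mechanism: run a semantic extractor on each frame and discard feature points that fall on dynamic objects before pose estimation. Below is a minimal sketch of that idea in Python, assuming torchvision's pretrained DeepLabv3 (ResNet-50 backbone) as the extractor and ORB features via OpenCV; the dynamic-class list and the function name are illustrative assumptions, not the paper's implementation.

```python
# Sketch: semantic filtering of feature points for a VO/V-SLAM front end.
# Assumptions: torchvision's DeepLabv3-ResNet50 (COCO-with-VOC labels) as the
# segmenter and ORB keypoints; the paper also evaluates YOLOv3 and Mask R-CNN.
import cv2
import numpy as np
import torch
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

# VOC class ids treated as potentially dynamic (assumption for this sketch):
# bicycle, bus, car, motorbike, person, train.
DYNAMIC_CLASSES = {2, 6, 7, 14, 15, 19}

def filter_dynamic_keypoints(bgr_image):
    """Return ORB keypoints/descriptors that do not lie on dynamic objects."""
    h, w = bgr_image.shape[:2]
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    batch = preprocess(torch.from_numpy(rgb).permute(2, 0, 1)).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)["out"][0]            # (num_classes, h', w')
    labels = logits.argmax(0).numpy().astype(np.uint8)
    # Resize the label map back to input size; nearest keeps class ids intact.
    labels = cv2.resize(labels, (w, h), interpolation=cv2.INTER_NEAREST)
    dynamic_mask = np.isin(labels, list(DYNAMIC_CLASSES))

    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(bgr_image, None)
    if descriptors is None:
        return [], None
    kept_kps, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        # Clamp subpixel coordinates to valid pixel indices.
        x = min(int(round(kp.pt[0])), w - 1)
        y = min(int(round(kp.pt[1])), h - 1)
        if not dynamic_mask[y, x]:
            kept_kps.append(kp)
            kept_desc.append(desc)
    return kept_kps, np.array(kept_desc)
```

Keypoints surviving the mask would then feed the usual matching and pose-estimation stages unchanged, which is what makes this kind of filtering easy to bolt onto an existing VO or V-SLAM pipeline.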
ISSN: 2643-685X
DOI: 10.1109/LARS/SBR/WRE59448.2023.10332956