Can Semantic-based Filtering of Dynamic Objects improve Visual SLAM and Visual Odometry?

This work introduces a novel approach to improve robot perception in dynamic environments using Semantic Filtering. The goal is to enhance Visual Simultaneous Localization and Mapping (V-SLAM) and Visual Odometry (VO) tasks by excluding feature points associated with moving objects. Four different approaches for semantic extraction, namely YOLOv3, DeepLabv3 with two different backbones, and Mask R-CNN, were evaluated. The framework was tested on various datasets, including KITTI, TUM, and a simulated sequence generated on AirSim. The results demonstrated that the proposed semantic filtering significantly reduced estimation errors in VO tasks, with average error reduction ranging from 2.81% to 15.98%, while the results for V-SLAM were similar to the base work, especially for sequences with detected loops. Although fewer keypoints are used, the estimations benefit from the points excluded in VO. More experiments are needed to address the effects in V-SLAM due to the presence of loop closure and the nature of the datasets.
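
The core filtering step summarized in the abstract (discarding feature points that fall on pixels belonging to moving objects) can be illustrated with a short sketch. This is not the authors' implementation: the function name, the choice of ORB as the feature detector, and the pre-computed dynamic_mask (a binary mask of dynamic classes produced by a model such as DeepLabv3 or Mask R-CNN) are assumptions made here for illustration only.

import cv2
import numpy as np

def filter_dynamic_keypoints(image, dynamic_mask):
    """Detect ORB features and keep only those outside the dynamic-object mask.

    image        -- grayscale frame (H x W, uint8)
    dynamic_mask -- boolean array (H x W), True where a dynamic object was segmented
    """
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return [], None

    kept_kp, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        # Drop the keypoint if it sits on a pixel labeled as a dynamic object.
        if not dynamic_mask[y, x]:
            kept_kp.append(kp)
            kept_desc.append(desc)

    return kept_kp, (np.array(kept_desc) if kept_desc else None)

The surviving keypoints and descriptors would then be passed unchanged to the pose-estimation front end of the VO or V-SLAM pipeline, which is the sense in which such a filter acts as a drop-in pre-processing step.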

Bibliographic Details
Main Authors: Costa, Leonardo Rezende, Colombini, Esther Luna
Format: Conference Proceeding
Language: English
Subjects: Feature extraction; Filtering; Semantic; Semantics; Simultaneous localization and mapping; Trajectory; V-SLAM; Vehicle dynamics; Visual Odometry; Visualization
Online Access: Request full text
cited_by
cites
container_end_page 572
container_issue
container_start_page 567
container_title
container_volume
creator Costa, Leonardo Rezende
Colombini, Esther Luna
description This work introduces a novel approach to improve robot perception in dynamic environments using Semantic Filtering. The goal is to enhance Visual Simultaneous Localization and Mapping (V-SLAM) and Visual Odometry (VO) tasks by excluding feature points associated with moving objects. Four different approaches for semantic extraction, namely YOLOv3, DeepLabv3 with two different backbones, and Mask R-CNN, were evaluated. The framework was tested on various datasets, including KITTI, TUM, and a simulated sequence generated on AirSim. The results demonstrated that the proposed semantic filtering significantly reduced estimation errors in VO tasks, with average error reduction ranging from 2.81% to 15.98%, while the results for V-SLAM were similar to the base work, especially for sequences with detected loops. Although fewer keypoints are used, the estimations benefit from the points excluded in VO. More experiments are needed to address the effects in V-SLAM due to the presence of loop closure and the nature of the datasets.
doi_str_mv 10.1109/LARS/SBR/WRE59448.2023.10332956
format conference_proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 2643-685X
ispartof 2023 Latin American Robotics Symposium (LARS), 2023 Brazilian Symposium on Robotics (SBR), and 2023 Workshop on Robotics in Education (WRE), 2023, p.567-572
issn 2643-685X
language eng
recordid cdi_ieee_primary_10332956
source IEEE Xplore All Conference Series
subjects Feature extraction
Filtering
Semantic
Semantics
Simultaneous localization and mapping
Trajectory
V-SLAM
Vehicle dynamics
Visual Odometry
Visualization
title Can Semantic-based Filtering of Dynamic Objects improve Visual SLAM and Visual Odometry?