
Real-time visual SLAM based YOLO-Fastest for dynamic scenes

Within the realm of autonomous robotic navigation, simultaneous localization and mapping (SLAM) serves as a critical perception technology and continues to draw heightened attention in contemporary research. Traditional SLAM systems perform well in static environments, but in the real physical world dynamic objects can break the static geometric constraints on which a SLAM system relies, limiting its practical application. In this paper, a robust dynamic RGB-D SLAM system is proposed that expands the number of static points in the scene by incorporating the YOLO-Fastest detector, ensuring that the geometric constraint model can be constructed reliably. On that basis, a new thresholding model is designed to distinguish the dynamic features inside each object bounding box, exploiting double polyline constraints and the residuals after reprojection to filter out dynamic feature points. In addition, two Gaussian models are constructed to segment the moving object within the bounding box of the depth image, achieving an effect similar to instance segmentation while preserving computational speed. Experiments on the dynamic sequences of the TUM dataset show that the root mean squared error of the absolute trajectory error of the proposed algorithm improves by at least 80% compared with ORB-SLAM2. The system is also more robust than DS-SLAM and DynaSLAM on both high- and low-dynamic sequences, and can effectively provide intelligent localization and navigation for mobile robots.
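
The abstract gives only a high-level description of the dynamic-feature filtering step. The minimal Python sketch below illustrates the reprojection-residual part of the idea: points that fall inside a detector bounding box and reproject with a large error are treated as dynamic and dropped. The function name, array shapes, pixel threshold, and box format are assumptions made here for illustration, and the double polyline constraint from the paper is not reproduced.

```python
import numpy as np

def filter_dynamic_points(points_3d, points_2d, K, R, t, boxes, residual_thresh=2.0):
    """Keep feature points whose reprojection residual stays below a threshold.

    points_3d : (N, 3) map points, points_2d : (N, 2) matched pixel observations,
    K : (3, 3) intrinsics, R : (3, 3) rotation, t : (3, 1) translation,
    boxes : iterable of (x1, y1, x2, y2) detector boxes for potentially moving objects.
    Illustrative sketch only, not the paper's implementation.
    """
    # Project the map points into the current frame with the estimated pose.
    proj = (K @ (R @ points_3d.T + t)).T          # (N, 3) homogeneous pixel coords
    proj = proj[:, :2] / proj[:, 2:3]             # perspective division -> (N, 2)
    residuals = np.linalg.norm(proj - points_2d, axis=1)

    keep = np.ones(len(points_2d), dtype=bool)
    for (x1, y1, x2, y2) in boxes:
        in_box = ((points_2d[:, 0] >= x1) & (points_2d[:, 0] <= x2) &
                  (points_2d[:, 1] >= y1) & (points_2d[:, 1] <= y2))
        # Inside a box that may contain a moving object, large residuals are
        # treated as dynamic features and discarded from pose optimization.
        keep &= ~(in_box & (residuals > residual_thresh))
    return keep
```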

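The two-Gaussian depth segmentation mentioned in the abstract can likewise be sketched under simple assumptions: fit a two-component mixture to the valid depth values inside a detection box and keep the nearer component as the moving object. Using scikit-learn's GaussianMixture and assuming the foreground lies closer to the camera than the background are choices made here, not details taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_object_in_box(depth, box):
    """Return a boolean foreground mask for the region of `depth` inside `box`.

    A two-component Gaussian mixture is fitted to the valid depth values; the
    component with the smaller mean depth is assumed to be the moving object.
    Illustrative sketch only; the paper's exact modelling may differ.
    """
    x1, y1, x2, y2 = box
    patch = depth[y1:y2, x1:x2].astype(np.float32)
    valid = patch > 0                               # ignore missing depth readings
    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(patch[valid].reshape(-1, 1))
    fg_label = int(np.argmin(gmm.means_.ravel()))   # nearer component = object
    mask = np.zeros_like(patch, dtype=bool)
    mask[valid] = labels == fg_label
    return mask                                     # object mask within the box
```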
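For reference, the reported evaluation metric, the root mean squared error (RMSE) of the absolute trajectory error (ATE), reduces to the computation below once the estimated and ground-truth trajectories are time-associated and aligned; the similarity alignment step of the standard TUM benchmark tooling is omitted here for brevity, and this sketch is not code from the paper.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of the absolute trajectory error between estimated and ground-truth
    camera positions (both (N, 3) arrays), assuming the trajectories are already
    associated in time and expressed in a common frame."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)   # per-frame position error
    return float(np.sqrt(np.mean(err ** 2)))
```

An "at least 80% improvement" over ORB-SLAM2 then corresponds to the proposed system's ATE RMSE being no more than one fifth of ORB-SLAM2's value on the same sequence.
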
Bibliographic Details
Published in: Measurement science & technology, 2024-05, Vol. 35 (5), p. 56305
Main Authors: Gong, Can; Sun, Ying; Zou, Chunlong; Tao, Bo; Huang, Li; Fang, Zifan; Tang, Dalai
Format: Article
Language: English
DOI: 10.1088/1361-6501/ad2669
ISSN: 0957-0233
EISSN: 1361-6501