Vision and Wi-Fi fusion in probabilistic appearance-based localization
This article introduces an indoor topological localization algorithm that uses vision and Wi-Fi signals. Its main contribution is a novel way of merging data from these sensors. The designed system requires no knowledge of the building plan or of the positions of the Wi-Fi access points. By making the Wi-Fi signature suited to the FABMAP algorithm, this work develops an early fusion framework that solves the global localization and kidnapped robot problems. The resulting algorithm has been tested and compared with FABMAP visual localization on data acquired by a Pepper robot in three different environments: an office building, a middle school, and a private apartment. Numerous runs with different robots were carried out over several months, for a total covered distance of 6.4 km. Constraints were applied during acquisition so that the experiments match real use cases of Pepper robots. Without any tuning, the early fusion framework outperforms visual localization in all tested situations, with a significant margin in environments where vision struggles with moving objects or perceptual aliasing. In such conditions, 90.6% of estimated localizations are less than 5 m from ground truth with the early fusion framework, compared with 77.6% for visual localization. Furthermore, compared with other classical fusion strategies, early fusion produces the best localization results: in all tested situations it improves on visual localization without degrading it where Wi-Fi signals carry little information.
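The abstract's key idea is making the Wi-Fi signature "suited to the FABMAP algorithm" so that vision and Wi-Fi can be fused before inference rather than after. The sketch below is a toy illustration of that early-fusion idea, not the paper's implementation: the access-point vocabulary, the −75 dBm visibility threshold, and the naive-Bayes place likelihood (a simplified stand-in for FABMAP's Chow-Liu-based observation model) are all invented for the example.

```python
# Hypothetical early-fusion sketch: each known access-point MAC address is
# treated as one more "word" in the appearance vocabulary, so a single
# FABMAP-style likelihood is evaluated over the fused observation vector.
import numpy as np

RSSI_THRESHOLD = -75  # dBm; assumed cutoff for "access point visible"

def wifi_signature(scan: dict, ap_vocabulary: list) -> np.ndarray:
    """Binarize a Wi-Fi scan {mac: rssi_dBm} over a fixed AP vocabulary."""
    return np.array([scan.get(mac, -100) > RSSI_THRESHOLD
                     for mac in ap_vocabulary], dtype=float)

def fused_observation(visual_words: np.ndarray, scan: dict,
                      ap_vocabulary: list) -> np.ndarray:
    """Early fusion: concatenate visual words and Wi-Fi words into one vector."""
    return np.concatenate([visual_words, wifi_signature(scan, ap_vocabulary)])

def place_log_likelihoods(z: np.ndarray, place_models: np.ndarray,
                          eps: float = 1e-2) -> np.ndarray:
    """Naive-Bayes stand-in for the observation model:
    place_models[k, i] = P(word i observed | place k), clipped away from 0/1."""
    p = np.clip(place_models, eps, 1.0 - eps)
    return (z * np.log(p) + (1.0 - z) * np.log(1.0 - p)).sum(axis=1)

# Usage: localize against 3 known places, with a 4-word visual vocabulary
# and 2 known access points (all values made up for illustration).
ap_vocab = ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"]
z = fused_observation(np.array([1.0, 0.0, 1.0, 1.0]),
                      {"aa:bb:cc:dd:ee:01": -60}, ap_vocab)
models = np.random.default_rng(0).uniform(size=(3, 6))  # 3 places x 6 words
print(int(np.argmax(place_log_likelihoods(z, models))))  # most likely place
```

Under this reading, the benefit the abstract claims for early fusion follows naturally: a single appearance likelihood is computed over the joint vocabulary, so Wi-Fi words simply contribute little where their conditional probabilities are uninformative, instead of a separate Wi-Fi estimate degrading the visual one.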
| Published in: | The International Journal of Robotics Research, 2022-06, Vol. 41 (7), p. 721-738 |
|---|---|
| Main Authors: | Nowakowski, Mathieu; Joly, Cyril; Dalibard, Sébastien; Garcia, Nicolas; Moutarde, Fabien |
| Format: | Article |
| Language: | English |
| Subjects: | Algorithms; Computer Science; Damage localization; Data acquisition; Localization; Office buildings; Robotics; Robots; Vision; Visual signals |
| DOI: | 10.1177/0278364920910485 |
| Publisher: | London, England: SAGE Publications |
| Rights: | The Author(s) 2020; distributed under a Creative Commons Attribution 4.0 International License |
| ORCID: | 0000-0003-4799-7285; 0000-0002-2899-0179 |
| ISSN: | 0278-3649 |
| EISSN: | 1741-3176 |
| Source: | Sage Journals Online |