
Two Efficient Visual Methods for Segment Self-localization

Bibliographic Details
Published in: SN computer science, 2021-04, Vol. 2 (2), p. 80, Article 80
Main Authors: Kassir, Mohamad Mahdi, Palhang, Maziar, Ahmadzadeh, Mohammad Reza
Format: Article
Language:English
Description: Localization is an essential step in visual navigation algorithms in robotics. Some visual navigation algorithms define the environment through a sequence of images, called a visual path. The interval between each pair of consecutive images is called a segment. One crucial step in this kind of navigation is to find the segment in which the robot is located (segment self-localization). Visual segment self-localization methods consist of two stages. In the first stage, feature matching is performed between the robot's current image and all the images that form the visual path; usually, outlier-removal methods such as RANSAC are then applied to discard mismatched features. In the second stage, a segment is chosen based on the results of the first stage. Existing segment self-localization methods estimate the segment based only on the percentage of matched features, which leads to incorrect estimates in some cases. In this paper, an additional parameter, based on the perspective projection model, is also considered when estimating the segment. Moreover, instead of RANSAC, which is a stochastic and time-consuming method, a simpler and more effective method is proposed for outlier detection. The proposed methods are tested on the Karlsruhe dataset, and acceptable results are obtained. The methods are also compared with three methods reviewed by Nguyen et al. (J Intell Robot Syst 84:217, 2016). Although the proposed methods use a simpler outlier-detection method, they give more accurate results.
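The two-stage scheme in the abstract can be sketched generically. This is a hypothetical illustration, not the authors' code: features are reduced to hashable descriptors, stage 1 scores each visual-path image by its matched-feature percentage against the current image, and stage 2 picks the segment whose two endpoint images score highest (the paper's perspective-projection parameter and custom outlier filter are not reproduced here, since the record gives no detail on them).

```python
def match_percentage(current_features, path_features):
    """Stage 1: fraction of the current image's features found in one
    visual-path image. Features are simplified to hashable descriptors."""
    if not current_features:
        return 0.0
    matched = sum(1 for f in current_features if f in path_features)
    return matched / len(current_features)

def localize_segment(current_features, visual_path):
    """Stage 2: return index i of the segment bounded by visual_path[i]
    and visual_path[i+1] whose endpoint images best match the current image."""
    scores = [match_percentage(current_features, set(img)) for img in visual_path]
    # A segment is scored by combining its two endpoint images.
    segment_scores = [scores[i] + scores[i + 1] for i in range(len(scores) - 1)]
    return max(range(len(segment_scores)), key=segment_scores.__getitem__)

# Toy visual path of three images, each described by a feature set.
path = [["a", "b"], ["b", "c", "d"], ["c", "d", "e"]]
current = ["c", "d"]
print(localize_segment(current, path))  # segment between images 1 and 2
```

In a real pipeline the descriptor sets would come from a detector such as SIFT or ORB, and the matching would be nearest-neighbor search over descriptor vectors rather than set membership; the decision rule, however, has the same shape.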
DOI: 10.1007/s42979-021-00492-0
Publisher: Springer Singapore
ISSN: 2662-995X
EISSN: 2661-8907
Source: Springer Nature
Subjects:
Algorithms
Computer Imaging
Computer Science
Computer Systems Organization and Communication Networks
Data analysis
Data Structures and Information Theory
Information Systems and Communication Service
Localization
Matching
Methods
Navigation
Original Research
Outliers (statistics)
Parameters
Pattern Recognition and Graphics
Projection model
Robotics
Robots
Segments
Software Engineering/Programming and Operating Systems
Teaching
Vision