ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024-04, Vol. PP, pp. 1-16
Format: Article
Language: English
Main Authors: An, Dong; Wang, Hanqing; Wang, Wenguan; Wang, Zun; Huang, Yan; He, Keji; Wang, Liang
Abstract: Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments. It becomes increasingly crucial in the field of embodied AI, with potential applications in autonomous navigation, search and rescue, and human-robot interaction. In this paper, we propose to address a more practical yet challenging counterpart setting - vision-language navigation in continuous environments (VLN-CE). To develop a robust VLN-CE agent, we propose a new navigation framework, ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments. ETPNav performs online topological mapping of environments by self-organizing predicted waypoints along a traversed path, without prior environmental experience. It privileges the agent to break down the navigation procedure into high-level planning and low-level control. Concurrently, ETPNav utilizes a transformer-based cross-modal planner to generate navigation plans based on topological maps and instructions. The plan is then performed through an obstacle-avoiding controller that leverages a trial-and-error heuristic to prevent navigation from getting stuck in obstacles. Experimental results demonstrate the effectiveness of the proposed method. ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets, respectively. Our code is available at https://github.com/MarSaKi/ETPNav .
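The online topological mapping the abstract describes (self-organizing predicted waypoints into graph nodes along the traversed path) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the plain (x, y) node representation, and the `merge_dist` threshold are all assumptions made for the sketch.

```python
import math

def update_topo_map(nodes, edges, cur_node, waypoints, merge_dist=0.5):
    """Fold freshly predicted waypoints into a topological graph.

    nodes: list of (x, y) node positions accumulated so far.
    edges: set of (i, j) index pairs (i < j) linking graph nodes.
    cur_node: index of the node the agent currently occupies.
    waypoints: (x, y) positions predicted around the agent this step.
    merge_dist: assumed threshold below which a waypoint is merged
        into the nearest existing node instead of creating a new one.
    """
    for wx, wy in waypoints:
        # Locate the nearest existing node to this waypoint.
        best, best_d = 0, float("inf")
        for i, (nx, ny) in enumerate(nodes):
            d = math.hypot(wx - nx, wy - ny)
            if d < best_d:
                best, best_d = i, d
        if best_d < merge_dist:
            target = best            # close enough: reuse that node
        else:
            nodes.append((wx, wy))   # otherwise grow the graph
            target = len(nodes) - 1
        if target != cur_node:       # connect it to the current node
            edges.add((min(cur_node, target), max(cur_node, target)))
    return nodes, edges
```

Merging nearby waypoints rather than always adding nodes is what keeps such a map compact over a long traversed path; the real system additionally stores visual features at each node for the cross-modal planner.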
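The trial-and-error obstacle-avoiding controller mentioned in the abstract can likewise be sketched. The `env` handle with its `turn`/`forward` methods, the rotation step, and the retry budget are hypothetical stand-ins for the simulator interface, chosen only to show the heuristic's shape.

```python
def tryout_control(env, heading_to_goal, rotate_step=15, max_tries=24):
    """Trial-and-error low-level control toward a subgoal.

    env is a hypothetical simulator handle exposing turn(degrees) and
    forward() -> bool, where forward() returns False when the step is
    blocked by an obstacle. The agent first faces the subgoal; then,
    whenever a forward step fails, it rotates by rotate_step degrees
    and retries until a step succeeds or the budget runs out.
    """
    env.turn(heading_to_goal)          # face the planned subgoal
    for _ in range(max_tries):
        if env.forward():              # step succeeded: not stuck
            return True
        env.turn(rotate_step)          # blocked: rotate and retry
    return False                       # exhausted the attempt budget
```

The point of the heuristic is that a blocked forward step is treated as a probe rather than a failure: rotating and re-probing lets the agent slide around obstacles without any explicit occupancy map.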
DOI: 10.1109/TPAMI.2024.3386695
PMID: 38593013
Publisher: IEEE, United States
ISSN: 0162-8828
EISSN: 1939-3539, 2160-9292
Source: IEEE Electronic Library (IEL) Journals
Subjects: Layout; Measurement; Navigation; Obstacle Avoidance; Planning; Semantics; Task analysis; Topological Map; Transformers; Vision-Language Navigation