
NOLO: Navigate Only Look Once

The in-context learning ability of Transformer models has brought new possibilities to visual navigation. In this paper, we focus on the video navigation setting, where an in-context navigation policy needs to be learned purely from videos in an offline manner, without access to the actual environment. For this setting, we propose Navigate Only Look Once (NOLO), a method for learning a navigation policy that possesses the in-context ability and adapts to new scenes by taking corresponding context videos as input without finetuning or re-training. To enable learning from videos, we first propose a pseudo action labeling procedure using optical flow to recover the action label from egocentric videos. Then, offline reinforcement learning is applied to learn the navigation policy. Through extensive experiments on different scenes both in simulation and the real world, we show that our algorithm outperforms baselines by a large margin, which demonstrates the in-context learning ability of the learned policy. For videos and more information, visit https://sites.google.com/view/nol0.
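The abstract describes recovering pseudo action labels from egocentric video with optical flow before running offline reinforcement learning. As a rough illustration only (not the paper's actual procedure), a frame-pair labeler might classify the dominant flow direction into a small discrete action set; the action names, thresholds, and the Farneback flow estimator below are assumptions made for this sketch.

```python
# Minimal sketch of pseudo action labeling from egocentric video via optical flow.
# The action set (move_forward / turn_left / turn_right / stop), the thresholds,
# and the use of Farneback dense flow are illustrative assumptions, not NOLO's
# actual implementation.
import cv2
import numpy as np

TURN_THRESH = 2.0   # hypothetical: mean horizontal flow (px) above which we call it a turn
MOVE_THRESH = 0.5   # hypothetical: mean flow magnitude (px) above which we call it motion


def pseudo_label(prev_frame: np.ndarray, next_frame: np.ndarray) -> str:
    """Assign a discrete pseudo action label to one pair of consecutive frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between the two frames: an H x W x 2 array of per-pixel (dx, dy).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_dx = float(flow[..., 0].mean())                     # average horizontal motion
    mean_mag = float(np.linalg.norm(flow, axis=-1).mean())   # average flow magnitude
    if abs(mean_dx) > TURN_THRESH:
        # A rightward camera yaw shifts image content left (negative dx), and vice versa.
        return "turn_right" if mean_dx < 0 else "turn_left"
    if mean_mag > MOVE_THRESH:
        return "move_forward"
    return "stop"


def label_video(frames: list[np.ndarray]) -> list[str]:
    """Label every consecutive frame pair of an egocentric video."""
    return [pseudo_label(a, b) for a, b in zip(frames, frames[1:])]
```

The resulting (frame, pseudo-action) pairs would then form the offline dataset on which the navigation policy is trained, as outlined in the abstract.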

Bibliographic Details
Published in: arXiv.org, 2024-11
Main Authors: Zhou, Bohan; Zhang, Zhongbin; Wang, Jiangxing; Lu, Zongqing
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Subjects: Algorithms; Context; Labels; Machine learning; Navigation; Optical flow (image analysis); Video
Online Access: https://www.proquest.com/docview/3088983665