
Robotic environmental state recognition with pre-trained vision-language models and black-box optimization

In order for robots to autonomously navigate and operate in diverse environments, it is essential for them to recognize the state of their environment. However, environmental state recognition has traditionally involved distinct methods tailored to each state to be recognized. In this study, we perform unified environmental state recognition for robots through spoken language with pre-trained large-scale vision-language models. We apply Visual Question Answering and Image-to-Text Retrieval, which are tasks of vision-language models. We show that with our method it is possible to recognize not only whether a room door is open or closed, but also whether a transparent door is open or closed and whether water is running in a sink, without training neural networks or manual programming. In addition, recognition accuracy can be improved by selecting appropriate texts from the set of prepared texts based on black-box optimization. For each state recognition, only the text set and its weighting need to be changed, eliminating the need to prepare multiple different models and programs, and facilitating the management of source code and computer resources. We experimentally demonstrate the effectiveness of our method and apply it to a recognition behavior on the mobile robot Fetch.
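As one concrete illustration of the pipeline the abstract describes, the sketch below uses CLIP (via Hugging Face `transformers`) as an example of a pre-trained vision-language model for image-to-text retrieval, and plain random search as the black-box optimizer over text weights. The prompts, file names, labelled calibration set, and the sign-of-weighted-sum decision rule are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch, assuming CLIP as the retrieval model and a small labelled
# calibration set; prompts, file names, and the decision rule are hypothetical.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Prepared text set describing the two states (illustrative prompts).
texts = [
    "a photo of an open door",
    "a photo of a closed door",
    "the door of the room is open",
    "the door of the room is shut",
]

@torch.no_grad()
def similarities(image_path: str) -> np.ndarray:
    """Similarity scores between one image and every prepared text."""
    image = Image.open(image_path)
    inputs = processor(text=texts, images=image,
                       return_tensors="pt", padding=True).to(device)
    out = model(**inputs)
    # logits_per_image: (1, num_texts) scaled image-text similarities
    return out.logits_per_image[0].cpu().numpy()

# Hypothetical labelled calibration images: (file, state is "open"?).
data = [("door_open_1.jpg", True), ("door_closed_1.jpg", False)]
sims = np.stack([similarities(path) for path, _ in data])   # (N, num_texts)
labels = np.array([lab for _, lab in data])

def accuracy(w: np.ndarray) -> float:
    """Classify by the sign of the weighted similarity sum."""
    return float(np.mean((sims @ w > 0) == labels))

# Black-box optimization of the text weights. Plain random search is used
# here for self-containment; any black-box optimizer (e.g. CMA-ES) could be
# dropped in against the same objective.
rng = np.random.default_rng(0)
best_w, best_acc = None, -1.0
for _ in range(1000):
    w = rng.uniform(-1.0, 1.0, size=len(texts))
    acc = accuracy(w)
    if acc > best_acc:
        best_w, best_acc = w, acc
print(f"best accuracy {best_acc:.2f} with weights {np.round(best_w, 2)}")
```

Under this reading, switching to a different state (e.g. water running in a sink) would only require swapping the text set and re-running the weight optimization, with no model retraining, which is the management benefit the abstract emphasizes.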

Bibliographic Details
Published in: Advanced Robotics, 2024-09, Vol. 38 (18), p. 1255-1264
Main Authors: Kawaharazuka, Kento; Obinata, Yoshiki; Kanazawa, Naoaki; Okada, Kei; Inaba, Masayuki
Format: Article
Language: English
Subjects: black-box optimization; environmental state recognition; vision-language model
ISSN: 0169-1864
EISSN: 1568-5535
DOI: 10.1080/01691864.2024.2366995
Publisher: Taylor & Francis
Online Access: https://doi.org/10.1080/01691864.2024.2366995