
Adaptive Visual Information Gathering for Autonomous Exploration of Underwater Environments

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, pp. 136487-136506
Main Authors: Guerrero, Eric; Bonin-Font, Francisco; Oliver, Gabriel
Format: Article
Language: English
Description: This work presents the development and field testing of a novel adaptive visual information gathering (AVIG) framework for autonomous exploration of benthic environments using AUVs. The objective is to dynamically adapt the robot's exploration using the visual information gathered online. The framework is based on a novel decision-time adaptive replanning (DAR) behavior that works together with a sparse Gaussian process (SGP) for environmental modeling and a convolutional neural network (CNN) for semantic image segmentation, and it is executed in mission time. The SGP uses semantic data obtained from stereo images to probabilistically model the spatial distribution of certain species of seagrass that colonize the sea bottom, forming widespread meadows. The uncertainty of the probabilistic model provides a measure of sampling informativeness to the DAR behavior, which has been designed to execute successive informative paths, without stopping, considering the newest information obtained from the SGP. We solve the informative path planning (IPP) problem by means of a novel depth-first (DF) version of Monte Carlo tree search (MCTS). The DF-MCTS method has been designed to explore the state space in a depth-first fashion, provide solution paths of a given length in an anytime manner, and reward smooth paths for field realization with non-holonomic robots. The complete framework has been integrated into a ROS environment as a high-level layer of the AUV software architecture. A set of simulations and field tests shows the effectiveness of the framework in gathering data in P. oceanica environments.
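The core idea the abstract describes — scoring candidate paths by the model's predictive uncertainty while rewarding smoothness for a non-holonomic vehicle — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses an exact zero-mean GP with an RBF kernel instead of their SGP, and the function names (`gp_predictive_variance`, `path_score`) and the `smooth_weight` penalty are assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 2-D points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predictive_variance(X_train, X_query, length_scale=1.0, noise=1e-2):
    """Predictive variance of a zero-mean GP at the query points.

    Only the *locations* of past observations matter for the variance,
    which is why uncertainty can steer exploration before labels arrive.
    """
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_query, length_scale)
    Kss = rbf_kernel(X_query, X_query, length_scale)
    v = np.linalg.solve(K, Ks)
    return np.diag(Kss - Ks.T @ v)

def path_score(path, X_train, smooth_weight=0.1):
    """Informativeness of a candidate path (a sequence of 2-D waypoints):
    summed predictive variance along it, minus a penalty on heading changes
    so that smooth paths, feasible for a non-holonomic AUV, are preferred."""
    info = gp_predictive_variance(X_train, path).sum()
    headings = np.diff(path, axis=0)
    turn = 0.0
    for a, b in zip(headings[:-1], headings[1:]):
        cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        turn += 1.0 - cosang  # 0 for a straight continuation, up to 2 for a U-turn
    return info - smooth_weight * turn
```

With this reward, a planner (the paper's DF-MCTS, or any anytime search) simply proposes fixed-length waypoint sequences and keeps the best-scoring one: a path through unexplored water scores higher than one retracing already-sampled ground.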
DOI: 10.1109/ACCESS.2021.3117343
ISSN: 2169-3536
Source: IEEE Xplore Open Access Journals
Subjects:
Adaptation models
Adaptive
Artificial neural networks
Autonomous underwater vehicles
Computer architecture
Data models
Environment models
Estimation
Exploration
Field study
Gaussian process
Image segmentation
information
marine
Path planning
Planning
posidonia
Probabilistic models
robotics
Robots
semantic
Semantics
Spatial distribution
Uncertainty
underwater