
Active Vision via Extremum Seeking for Robots in Unstructured Environments: Applications in Object Recognition and Manipulation

In this paper, a novel active vision strategy is proposed for optimizing the viewpoint of a robot's vision sensor for a given success criterion. The strategy is based on extremum seeking control (ESC), which introduces two main advantages: 1) our approach is model free: it does not require an explicit objective function or any other task model to calculate the gradient direction for viewpoint optimization. This brings new possibilities for the use of active vision in unstructured environments, since a priori knowledge of the surroundings and the target objects is not required. 2) ESC conducts continuous optimization, backed by mechanisms to escape from local maxima. This enables efficient execution of an active vision task. We demonstrate our approach with two applications in the object recognition and manipulation fields, where the model-free approach brings various benefits. For object recognition, our framework removes the dependence on offline training data for viewpoint optimization and makes the system robust to occlusions and changing lighting conditions. In object manipulation, the model-free approach allows us to increase the success rate of a grasp synthesis algorithm without the need for an object model; the algorithm only uses continuous measurements of the objective value, i.e., the grasp quality. Our experiments show that continuous viewpoint optimization can efficiently increase the data quality for the underlying algorithm while maintaining robustness.

Note to Practitioners: Vision sensors give robots flexibility and robustness in both industrial and domestic settings by supplying the data needed to analyze the surroundings and the state of the task. However, the quality of these data can be high or poor depending on the viewing angle of the vision sensor. For example, if the robot aims to recognize an object, images taken from certain angles (e.g., of feature-rich surfaces) can be more descriptive than others; if the robot's goal is to manipulate an object, observing it from a viewpoint that reveals easy-to-grasp "handles" makes the task simpler to execute. The algorithm presented in this paper aims to provide the robot with visual data of high quality relative to the task at hand by changing the vision sensor's viewpoint. Unlike other methods in the literature, our method does not require any task models (it is therefore model free) and utilizes only a quality value that can be measured from the current viewpoint (e.g., the object recognition success rate for the current image). The viewpoint of the sensor is changed continuously to increase the quality value until the robot is confident enough about the success of the execution. We demonstrate the application of the algorithm in the object recognition and manipulation domains. Nevertheless, it can be applied to many other robotics tasks where the viewing angle of the scene affects the robot's performance.
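The scheme the abstract describes is classic perturbation-based extremum seeking: a small periodic dither is added to the controlled variable (here, the viewpoint), the measured objective is demodulated against the same dither to estimate the local gradient, and the estimate drives a gradient ascent, all without an explicit task model. Below is a minimal, self-contained sketch of that generic scheme; the quadratic objective, gains, and filter constants are illustrative assumptions, not the authors' implementation, which additionally includes mechanisms for escaping local maxima.

```python
import math

def measure_quality(theta):
    # Hypothetical stand-in for a measured objective (e.g., grasp quality
    # at viewpoint angle theta); a quadratic with its maximum at theta = 1.3.
    return -(theta - 1.3) ** 2

def extremum_seek(theta0, steps=20000, dt=1e-3,
                  a=0.05,      # dither amplitude
                  omega=50.0,  # dither frequency (rad/s), fast vs. the ascent
                  k=5.0,       # gradient-ascent gain
                  wh=5.0):     # washout (high-pass) cutoff (rad/s)
    theta = theta0
    y_avg = measure_quality(theta0)  # low-pass state used as a washout filter
    for i in range(steps):
        t = i * dt
        y = measure_quality(theta + a * math.sin(omega * t))  # perturbed measurement
        y_avg += dt * wh * (y - y_avg)         # track the slow component of y
        y_hp = y - y_avg                       # high-passed objective
        grad_est = y_hp * math.sin(omega * t)  # demodulate: mean ~ (a/2) * dJ/dtheta
        theta += dt * k * grad_est             # ascend the estimated gradient
    return theta

print(extremum_seek(0.0))  # settles near the maximizer theta = 1.3
```

On a real system, measure_quality would be replaced by the task's measured score (recognition confidence, grasp quality) and theta by the commanded viewpoint coordinate; the time-scale separation between the dither and the ascent dynamics is what lets the demodulation recover the gradient without a task model.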
Bibliographic Details
Published in: IEEE Transactions on Automation Science and Engineering, 2018-10, Vol. 15 (4), p. 1810-1822
Main Authors: Calli, Berk; Caarls, Wouter; Wisse, Martijn; Jonker, Pieter P.
Format: Article
Language: English
Subjects: Active vision; Algorithms; Artificial neural networks; Dependence; Domains; Extremum seeking control (ESC); Grasping; Manipulation; Maxima; Object recognition; Optimization; Quality; Robot sensing systems; Robots; Robustness; Sensors; Success; Task analysis; Viewing; Vision; Vision sensors
DOI: 10.1109/TASE.2018.2807787
ISSN: 1545-5955
EISSN: 1558-3783