
DeepSym: Deep Symbol Generation and Rule Learning for Planning from Unsupervised Robot Interaction

Bibliographic Details
Published in: The Journal of Artificial Intelligence Research, 2022-01, Vol. 75, p. 709-745
Main Authors: Ahmetoglu, Alper, Seker, M. Yunus, Piater, Justus, Oztop, Erhan, Ugur, Emre
Format: Article
Language:English
Subjects: Artificial intelligence; Categories; Decision trees; Domains; Knowledge representation; Neural networks; Planning; Probability theory; Reasoning; Robot arms; Robot learning; Robotics; Robots; Symbols; Task complexity
Description: Symbolic planning and reasoning are powerful tools for robots tackling complex tasks. However, the need to manually design the symbols restricts their applicability, especially for robots that are expected to act in open-ended environments. Therefore, symbol formation and rule extraction should be considered part of robot learning, which, when done properly, will offer scalability, flexibility, and robustness. Towards this goal, we propose a novel general method that finds action-grounded, discrete object and effect categories and builds probabilistic rules over them for non-trivial action planning. Our robot interacts with objects using an initial action repertoire that is assumed to be acquired earlier, and observes the effects it can create in the environment. To form action-grounded object, effect, and relational categories, we employ a binary bottleneck layer in a predictive, deep encoder-decoder network that takes the image of the scene and the action applied as input, and generates the resulting effects in the scene in pixel coordinates. After learning, the binary latent vector represents action-driven object categories based on the interaction experience of the robot. To distill the knowledge represented by the neural network into rules useful for symbolic reasoning, a decision tree is trained to reproduce its decoder function. Probabilistic rules are extracted from the decision paths of the tree and are represented in the Probabilistic Planning Domain Definition Language (PPDDL), allowing off-the-shelf planners to operate on the knowledge extracted from the sensorimotor experience of the robot. The deployment of the proposed approach for a simulated robotic manipulator enabled the discovery of discrete representations of object properties such as ‘rollable’ and ‘insertable’. In turn, the use of these representations as symbols allowed the generation of effective plans for achieving goals, such as building towers of the desired height, demonstrating the effectiveness of the approach for multi-step object manipulation. Finally, we demonstrate that the system is not restricted to the robotics domain: we assess its applicability to the MNIST 8-puzzle domain, in which the learned symbols allow for the generation of plans that move the empty tile into any given position.
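The pipeline described in the abstract — continuous scene features pass through a binary bottleneck, and the resulting discrete codes are turned into probabilistic effect rules — can be illustrated with a minimal sketch. This is not the authors' code: the linear encoder, its weights `W` and `b`, the threshold binarization, the `push` action, and the `rolled`/`stayed` effect labels are all simplified, hypothetical stand-ins (the paper trains the binarization end-to-end and extracts rules from a decision tree over the codes).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, W, b):
    """Toy encoder: linear layer + sigmoid, thresholded to a binary code.

    In a DeepSym-style network the binarization is a bottleneck trained
    end-to-end with the decoder; here we only threshold a forward pass.
    """
    logits = features @ W + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > 0.5).astype(int)

# Hypothetical 4-D object features (e.g. shape/size descriptors) -> 2-bit codes.
W = rng.normal(size=(4, 2))
b = np.zeros(2)
objects = rng.normal(size=(5, 4))
codes = encode(objects, W, b)

# Count effect outcomes per binary code to get probabilistic rules, mimicking
# how decision-tree paths over the latent codes yield PPDDL rules in the paper.
effects = rng.choice(["rolled", "stayed"], size=5)  # stand-in effect labels
rules = {}
for code, eff in zip(map(tuple, codes), effects):
    rules.setdefault(code, {}).setdefault(eff, 0)
    rules[code][eff] += 1

for code, outcomes in rules.items():
    total = sum(outcomes.values())
    for eff, n in outcomes.items():
        print(f"code={code} action=push -> {eff} (p={n / total:.2f})")
```

Each printed line corresponds to one probabilistic rule of the kind the paper encodes in PPDDL: a precondition (the binary object category), an action, an effect, and an empirical probability.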
DOI: 10.1613/jair.1.13754
ISSN: 1076-9757
EISSN: 1943-5037