Deliberative Acting, Online Planning and Learning with Hierarchical Operational Models
In AI research, synthesizing a plan of action has typically used descriptive models of the actions that abstractly specify what might happen as a result of an action, and are tailored for efficiently computing state transitions. However, executing the planned actions has needed operational models, in which rich computational control structures and closed-loop online decision-making are used to specify how to perform an action in a nondeterministic execution context, react to events and adapt to an unfolding situation. Deliberative actors, which integrate acting and planning, have typically needed to use both of these models together -- which causes problems when attempting to develop the different models, verify their consistency, and smoothly interleave acting and planning. As an alternative, we define and implement an integrated acting and planning system in which both planning and acting use the same operational models. These rely on hierarchical task-oriented refinement methods offering rich control structures. The acting component, called Reactive Acting Engine (RAE), is inspired by the well-known PRS system. At each decision step, RAE can get advice from a planner for a near-optimal choice with respect to a utility function. The anytime planner uses a UCT-like Monte Carlo Tree Search procedure, called UPOM, whose rollouts are simulations of the actor's operational models. We also present learning strategies for use with RAE and UPOM that acquire, from online acting experiences and/or simulated planning results, a mapping from decision contexts to method instances as well as a heuristic function to guide UPOM. We demonstrate the asymptotic convergence of UPOM towards optimal methods in static domains, and show experimentally that UPOM and the learning strategies significantly improve the acting efficiency and robustness.
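As a rough illustration of the decision step described in the abstract, the sketch below shows a flat, UCB1-style selection among candidate refinement method instances, where each rollout is a simulated execution of the actor's operational model. This is not the authors' UPOM algorithm (which recurses through the refinement tree, is anytime, and optimizes an explicit utility function); the names `choose_method` and `simulate`, and the reward convention, are assumptions made here purely for illustration.

```python
import math
import random

def choose_method(candidates, simulate, n_rollouts=500, c=math.sqrt(2)):
    """Illustrative UCB1-style choice among candidate method instances.

    candidates  -- list of applicable method instances for the current task
    simulate    -- hypothetical callable: simulate(method) runs a rollout of the
                   actor's operational model for `method` in a sampled context
                   and returns a utility estimate (e.g. efficiency or success)
    """
    counts = {m: 0 for m in candidates}   # rollouts per method
    totals = {m: 0.0 for m in candidates} # accumulated simulated utility

    for t in range(1, n_rollouts + 1):
        # Try every method at least once, then follow the upper confidence bound.
        untried = [m for m in candidates if counts[m] == 0]
        if untried:
            m = random.choice(untried)
        else:
            m = max(candidates,
                    key=lambda x: totals[x] / counts[x]
                    + c * math.sqrt(math.log(t) / counts[x]))
        # A rollout is a simulation of the operational model,
        # not a search over a separate descriptive model.
        reward = simulate(m)
        counts[m] += 1
        totals[m] += reward

    # Recommend the method with the highest mean simulated utility.
    return max(candidates, key=lambda x: totals[x] / max(counts[x], 1))
```

For example, `choose_method(["m1", "m2"], simulate=lambda m: random.random())` simply returns whichever placeholder method fares better over its rollouts; in the paper's setting the rollouts would instead simulate the hierarchical operational models, and the choice would be made recursively at each refinement step.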
Published in: | arXiv.org 2021-11 |
---|---|
Main Authors: | Patra, Sunandita; Mason, James; Ghallab, Malik; Nau, Dana; Traverso, Paolo |
Format: | Article |
Language: | English |
Subjects: | Computer simulation; Decision making; Learning; Mapping |
container_title | arXiv.org |
---|---|
creator | Patra, Sunandita; Mason, James; Ghallab, Malik; Nau, Dana; Traverso, Paolo |
description | In AI research, synthesizing a plan of action has typically used descriptive models of the actions that abstractly specify what might happen as a result of an action, and are tailored for efficiently computing state transitions. However, executing the planned actions has needed operational models, in which rich computational control structures and closed-loop online decision-making are used to specify how to perform an action in a nondeterministic execution context, react to events and adapt to an unfolding situation. Deliberative actors, which integrate acting and planning, have typically needed to use both of these models together -- which causes problems when attempting to develop the different models, verify their consistency, and smoothly interleave acting and planning. As an alternative, we define and implement an integrated acting and planning system in which both planning and acting use the same operational models. These rely on hierarchical task-oriented refinement methods offering rich control structures. The acting component, called Reactive Acting Engine (RAE), is inspired by the well-known PRS system. At each decision step, RAE can get advice from a planner for a near-optimal choice with respect to a utility function. The anytime planner uses a UCT-like Monte Carlo Tree Search procedure, called UPOM, whose rollouts are simulations of the actor's operational models. We also present learning strategies for use with RAE and UPOM that acquire, from online acting experiences and/or simulated planning results, a mapping from decision contexts to method instances as well as a heuristic function to guide UPOM. We demonstrate the asymptotic convergence of UPOM towards optimal methods in static domains, and show experimentally that UPOM and the learning strategies significantly improve the acting efficiency and robustness. |
doi_str_mv | 10.48550/arxiv.2010.01909 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2448766822 |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3) |
subjects | Computer simulation; Decision making; Learning; Mapping |
title | Deliberative Acting, Online Planning and Learning with Hierarchical Operational Models |