
Planning with Durative Actions in Stochastic Domains

Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative actions. This poses severe restrictions in modeling and solving a real-world planning problem. We extend the MDP model to incorporate 1) simultaneous action execution, 2) durative actions, and 3) stochastic durations. We develop several algorithms to combat the computational explosion introduced by these features. The key theoretical ideas used in building these algorithms are: modeling a complex problem as an MDP in an extended state/action space, pruning of irrelevant actions, sampling of relevant actions, using informed heuristics to guide the search, hybridizing different planners to achieve the benefits of both, and approximating the problem and replanning. Our empirical evaluation illuminates the different merits of the various algorithms, viz., optimality, empirical closeness to optimality, theoretical error bounds, and speed.
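As a rough illustration of the extended-action-space idea the abstract mentions, the sketch below runs value iteration over an MDP whose "actions" are sets of primitive actions executed concurrently. This is not the paper's actual algorithm; all states, probabilities, and costs are made up for illustration, and the point is only that the action space grows as the set of subsets of primitive actions, which is the combinatorial blow-up the paper's pruning and sampling techniques address.

```python
# Illustrative sketch only: value iteration over an MDP with an action
# space extended to non-empty sets of concurrently executed primitives.
# The model (states, probabilities, costs) is hypothetical.
from itertools import combinations

STATES = ["s0", "s1", "goal"]
PRIMITIVE = ["a", "b"]
GAMMA = 0.9

def transitions(state, action_set):
    """Hypothetical transition model: success probability grows with the
    number of concurrent actions, but so does the per-step cost."""
    if state == "goal":
        return [(1.0, "goal", 0.0)]           # absorbing goal, zero cost
    p = min(0.9, 0.4 * len(action_set))       # made-up success probability
    nxt = "goal" if state == "s1" else "s1"
    cost = float(len(action_set))             # concurrency costs more
    return [(p, nxt, cost), (1.0 - p, state, cost)]

def extended_actions():
    """All non-empty subsets of primitives: exponential in |PRIMITIVE|."""
    for k in range(1, len(PRIMITIVE) + 1):
        yield from combinations(PRIMITIVE, k)

def value_iteration(iters=100):
    """Minimize expected discounted cost-to-goal over extended actions."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        for s in STATES:
            if s == "goal":
                continue
            V[s] = min(
                sum(p * (c + GAMMA * V[s2]) for p, s2, c in transitions(s, a))
                for a in extended_actions()
            )
    return V

print(value_iteration())
```

With more primitive actions the inner `min` ranges over exponentially many subsets, which is why the paper resorts to pruning irrelevant combinations and sampling relevant ones rather than enumerating them all.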


Bibliographic Details
Published in: The Journal of artificial intelligence research, 2008-01, Vol. 31, pp. 33-82
Main Authors: Mausam; Weld, D. S.
Format: Article
Language:English
DOI: 10.1613/jair.2269
ISSN: 1076-9757
EISSN: 1076-9757, 1943-5037
Subjects: Algorithms; Artificial intelligence; Markov processes; Planning