
Monte-Carlo Planning for Team Re-Formation Under Uncertainty: Model and Properties

Teamwork in decentralized systems plays a central role in recent artificial intelligence advances, such as in applications to disaster response. Decentralized partially observable Markov decision processes (Dec-POMDPs) have emerged as the de facto standard mathematical framework to study and optimally plan in sequentially decentralized multiagent systems under uncertainty. In this work, we focus our analysis on team formation and reformation in Decentralized POMDPs with a new model coined Team-POMDPs. We present some interesting structural properties of this model inherited from the field of cooperative game theory. We introduce a Monte Carlo-based planning algorithm to learn locally optimal team-reformation policies that tell our agents how to dynamically rearrange in order to better deal with the evolution of the task at hand. By reforming the team during execution, our experiments show that we are able to achieve higher expected long-term rewards than with stationary teams.
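The abstract refers to a Monte Carlo-based planning algorithm for learning team-reformation policies, but the record does not reproduce it. Purely as an illustration of what sampling-based planning over candidate team configurations can look like, here is a minimal self-contained sketch; it is not the authors' Team-POMDP algorithm, and every name in it (the agent set AGENTS, the toy reward simulator simulate_step, the horizon and sample counts) is an invented assumption for this example.

import random

# Hypothetical toy setup (not from the paper): agents are plain labels, a
# "team" is a frozenset of agents, and a stand-in stochastic simulator
# returns a sampled one-step reward for running the task with a given team.
AGENTS = ("a1", "a2", "a3")

def simulate_step(team, step):
    # Invented reward model: the task needs more agents at some steps, and
    # every deployed agent has a fixed cost.
    need = 1 + step % 3
    return min(len(team), need) - 0.3 * len(team) + random.gauss(0.0, 0.1)

def candidate_teams():
    # Every non-empty subset of the agent set is a possible re-formation.
    return [frozenset(a for i, a in enumerate(AGENTS) if mask >> i & 1)
            for mask in range(1, 2 ** len(AGENTS))]

def rollout_value(team, step, horizon):
    # Monte-Carlo rollout: keep the candidate team for the remaining steps
    # and accumulate the sampled rewards.
    return sum(simulate_step(team, t) for t in range(step, horizon))

def plan(horizon=6, samples=200):
    # Greedy sampling-based planning: at each step, score every candidate
    # re-formation by averaging rollouts and commit to the best one.
    policy = {}
    for step in range(horizon):
        scores = {team: sum(rollout_value(team, step, horizon)
                            for _ in range(samples)) / samples
                  for team in candidate_teams()}
        policy[step] = max(scores, key=scores.get)
    return policy

if __name__ == "__main__":
    for step, team in plan().items():
        print(step, sorted(team))

The subject terms below list Monte-Carlo Tree Search, so the paper's actual planner presumably builds a search tree over the Team-POMDP rather than using the flat greedy rollouts sketched here.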


Bibliographic Details
Main Authors: Cohen, Jonathan; Mouaddib, Abdel-Illah
Format: Conference Proceeding
Language: English
Subjects: Complexity theory; History; Monte Carlo methods; Monte-Carlo Tree Search; Multiagent systems; Planning; Planning under uncertainty; Task analysis; Team formation; Uncertainty
Online Access: Request full text
Published in: 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), 2018, p. 458-465
DOI: 10.1109/ICTAI.2018.00077
Publisher: IEEE
EISBN: 9781538674499; 1538674491
EISSN: 2375-0197
Source: IEEE Xplore All Conference Series