Monte-Carlo Planning for Team Re-Formation Under Uncertainty: Model and Properties
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Teamwork in decentralized systems plays a central role in recent artificial intelligence advances, such as in applications to disaster response. Decentralized partially observable Markov decision processes (Dec-POMDPs) have emerged as the de facto standard mathematical framework for studying and optimally planning in decentralized multiagent systems that act sequentially under uncertainty. In this work, we focus our analysis on team formation and re-formation in decentralized POMDPs through a new model coined Team-POMDPs. We present some interesting structural properties of this model inherited from cooperative game theory. We introduce a Monte Carlo-based planning algorithm that learns locally optimal team-reformation policies, which tell the agents how to dynamically rearrange themselves in order to better deal with the evolution of the task at hand. Our experiments show that, by re-forming the team during execution, we achieve higher expected long-term rewards than with stationary teams.
ISSN: 2375-0197
DOI: 10.1109/ICTAI.2018.00077
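
The summary above mentions a Monte Carlo-based planner for team re-formation, but the record does not describe the algorithm itself. The sketch below is therefore only a generic illustration of Monte Carlo rollout evaluation of candidate team structures; the names `simulate`, `candidate_teams`, and the particle-set `belief` are hypothetical stand-ins not taken from the paper, and the authors' actual method may differ substantially.

```python
# Minimal Monte-Carlo rollout sketch for picking a team re-formation.
# NOT the paper's algorithm: the interface below is an assumed stand-in.
import random
from typing import Callable, Iterable, Tuple, TypeVar

Team = TypeVar("Team")
State = TypeVar("State")


def choose_team(
    belief: Iterable[State],                        # particle set approximating the current belief
    candidate_teams: Iterable[Team],                # feasible team structures to evaluate
    simulate: Callable[[State, Team, int], float],  # samples a discounted return from a generative model
    num_rollouts: int = 100,
    horizon: int = 20,
) -> Tuple[Team, float]:
    """Return the candidate team with the highest Monte-Carlo estimate of long-term return."""
    particles = list(belief)
    best_team, best_value = None, float("-inf")
    for team in candidate_teams:
        total = 0.0
        for _ in range(num_rollouts):
            state = random.choice(particles)        # sample a state from the belief
            total += simulate(state, team, horizon)  # roll out this team structure
        value = total / num_rollouts
        if value > best_value:
            best_team, best_value = team, value
    return best_team, best_value


if __name__ == "__main__":
    # Toy usage with a noisy simulator that favours the larger team.
    teams = [("a",), ("a", "b")]
    sim = lambda state, team, horizon: sum(random.gauss(len(team), 1.0) for _ in range(horizon))
    print(choose_team(belief=[0], candidate_teams=teams, simulate=sim))
```

In this reading, re-evaluating `choose_team` at each decision point is what lets the team rearrange as the task evolves, whereas a stationary team corresponds to fixing one candidate for the whole horizon.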