Planning under Uncertainty for Robotic Tasks with Mixed Observability
| Published in: | The International Journal of Robotics Research, 2010-07, Vol. 29 (8), p. 1053–1068 |
| --- | --- |
| Main Authors: | , , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Summary: | Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been applied to various robotic tasks. However, solving POMDPs exactly is computationally intractable. A major challenge is to scale up POMDP algorithms for complex robotic tasks. Robotic systems often have mixed observability: even when a robot’s state is not fully observable, some components of the state may still be so. We use a factored model to represent separately the fully and partially observable components of a robot’s state and derive a compact lower-dimensional representation of its belief space. This factored representation can be combined with any point-based algorithm to compute approximate POMDP solutions. Experimental results show that on standard test problems, our approach improves the performance of a leading point-based POMDP algorithm by many times. |
| ISSN: | 0278-3649; 1741-3176 |
| DOI: | 10.1177/0278364910369861 |
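
The summary describes factoring the robot's state into a fully observable component and a partially observable component, so that the belief only needs to range over the hidden component. The sketch below is an illustration of that idea only, not the paper's implementation: the array names (`T_X`, `T_Y`, `Z`) and their layouts are assumptions chosen for the example.

```python
# Minimal sketch of a mixed-observability (factored) belief update.
# State s = (x, y): x is fully observable, y is partially observable,
# so the belief b is a distribution over y alone.
# T_X, T_Y, Z are hypothetical model arrays, not from the paper.

import numpy as np

def momdp_belief_update(b_y, x, a, x_next, obs, T_X, T_Y, Z):
    """Update the belief over the hidden component y.

    b_y                  : belief over y, shape (|Y|,)
    x, x_next            : observed fully observable component before/after the action
    a, obs               : action index, observation index
    T_X[x, y, a, x']     = P(x' | x, y, a)
    T_Y[x, y, a, x', y'] = P(y' | x, y, a, x')
    Z[x', y', a, o]      = P(o | x', y', a)
    Returns the updated belief over y, shape (|Y|,).
    """
    # Marginalise over the current hidden state y to get P(y' | b, x, a, x').
    predicted = np.einsum("y,y,yz->z",
                          b_y,
                          T_X[x, :, a, x_next],
                          T_Y[x, :, a, x_next, :])
    # Weight by the observation likelihood and renormalise.
    b_next = Z[x_next, :, a, obs] * predicted
    total = b_next.sum()
    if total == 0.0:
        raise ValueError("Observation has zero probability under this model")
    return b_next / total
```

Under this factoring, the belief lives in a space of dimension |Y| rather than |X|·|Y|, which is the lower-dimensional belief representation the summary refers to; a point-based solver can then sample and back up beliefs in this smaller space.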