Connection among Stochastic Hamilton–Jacobi–Bellman Equation, Path-Integral, and Koopman Operator on Nonlinear Stochastic Optimal Control
Published in: Journal of the Physical Society of Japan, 2021-10, Vol. 90 (10), p. 104802
Format: Article
Language: English
Summary: Path-integral control, which stems from the stochastic Hamilton–Jacobi–Bellman equation, is one method for controlling stochastic nonlinear systems. This paper offers a new insight into nonlinear stochastic optimal control problems from the perspective of Koopman operators. Even when a finite-dimensional dynamical system is nonlinear, the corresponding Koopman operator is linear. Although the Koopman operator is infinite-dimensional, an adequate approximation makes it tractable and useful in analysis and applications. From the Koopman-operator perspective, it is clarified that only a specific type of observable needs to be considered in the control problem; this fact becomes easier to understand via path-integral control. Furthermore, focusing on this specific observable leads to a natural power-series expansion, from which coupled ordinary differential equations for discrete-state-space systems are derived. A demonstration on nonlinear stochastic optimal control shows that the derived equations work well.
ISSN: 0031-9015; 1347-4073
DOI: 10.7566/JPSJ.90.104802
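The summary's central claim, that a nonlinear finite-dimensional system corresponds to a linear (though generally infinite-dimensional) Koopman operator acting on observables, can be illustrated with a minimal sketch. The system below is a standard textbook example (not taken from the paper itself): for a particular choice of observables, the Koopman lift happens to close at dimension three, so the nonlinear dynamics become exactly linear in the lifted coordinates. All parameter values here are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: the nonlinear system
#   dx/dt = mu * x
#   dy/dt = lam * (y - x**2)
# becomes exactly linear in the lifted observables g = (x, y, x**2):
#   dg/dt = K @ g,  since d(x**2)/dt = 2*x*(mu*x) = 2*mu*(x**2).
mu, lam = -0.5, -1.0
K = np.array([[mu,  0.0,  0.0],
              [0.0, lam, -lam],
              [0.0, 0.0, 2.0 * mu]])

def simulate(x0, y0, T=2.0, n=20000):
    """Integrate both representations with forward Euler and return
    the final nonlinear state (x, y) and lifted state g."""
    h = T / n
    x, y = x0, y0                     # original nonlinear dynamics
    g = np.array([x0, y0, x0**2])     # lifted *linear* dynamics
    for _ in range(n):
        x, y = x + h * mu * x, y + h * lam * (y - x**2)
        g = g + h * (K @ g)
    return (x, y), g

(x, y), g = simulate(1.0, 2.0)
# The linear lift tracks the nonlinear trajectory up to Euler error.
print(abs(g[0] - x), abs(g[1] - y), abs(g[2] - x**2))
```

In general the lifted space is infinite-dimensional and must be truncated, which is where the approximation discussed in the summary enters; this example is special in that the chosen observables close exactly.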