Data-Driven Direct Adaptive Risk-Sensitive Control of Stochastic Systems
Published in: Journal of Systems Science and Complexity, 2024, Vol. 37 (4), pp. 1446-1469
Main Authors:
Format: Article
Language: English
Summary: The authors propose a data-driven direct adaptive control law based on the adaptive dynamic programming (ADP) algorithm for continuous-time stochastic linear systems with partially unknown system dynamics and infinite-horizon quadratic risk-sensitive indices. Online data of the system are used to iteratively solve the generalized algebraic Riccati equation (GARE) and to learn the optimal control law directly. For the case with measurable system noises, the authors show that the adaptive control law approaches the optimal control law over time. For the case with unmeasurable system noises, the authors iteratively solve the GARE using the least-squares solution computed only from the measurable data instead of the exact solution of the regression equation. The authors also study how the intensity of the system noises, the intensity of the exploration noises, the initial iterative matrix, and the sampling period affect the convergence of the ADP algorithm. Finally, two numerical simulation examples demonstrate the effectiveness of the proposed algorithms.
ISSN: 1009-6124, 1559-7067
DOI: 10.1007/s11424-024-2421-z
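
The summary describes iteratively solving a GARE and learning the control gain from online data. As rough orientation only, below is a minimal, model-based sketch of Kleinman-style policy iteration on the standard (risk-neutral) algebraic Riccati equation; it is not the paper's data-driven, risk-sensitive algorithm. The paper's GARE carries a risk-sensitivity term, and its ADP scheme replaces the model-based policy-evaluation step below with least-squares regressions on measured trajectories. The system matrices, weights, iteration count, and tolerance here are assumed toy values.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Toy open-loop-stable system (assumed for illustration; not from the paper).
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # control weight

K = np.zeros((1, 2))   # K_0 = 0 is stabilizing because A is Hurwitz here

for k in range(20):
    Ak = A - B @ K
    # Policy evaluation: (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K_{k+1} = R^{-1} B^T P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

# Reference solution of the standard ARE, for comparison with the iterates.
P_star = solve_continuous_are(A, B, Q, R)
print("iterated P:\n", P)
print("ARE solution:\n", P_star)
print("gap:", np.linalg.norm(P - P_star))
```

In the data-driven setting described in the summary, the pair (P_k, K_{k+1}) would instead be identified from integrals of online state and input data, so the policy-evaluation step does not require knowledge of A.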