Online concurrent reinforcement learning algorithm to solve two-player zero-sum games for partially unknown nonlinear continuous-time systems
Published in: International Journal of Adaptive Control and Signal Processing, 2015-04, Vol. 29 (4), pp. 473-493
Main Authors:
Format: Article
Language: English
Summary: Online adaptive optimal control methods based on reinforcement learning algorithms typically need to check the persistence of excitation condition, which must be known a priori for the algorithm to converge. However, this condition is often infeasible to implement or monitor online. This paper proposes an online concurrent reinforcement learning algorithm (CRLA) based on neural networks (NNs) to solve the H∞ control problem for partially unknown continuous-time systems, in which the need for the persistence of excitation condition is relaxed by using the idea of concurrent learning. First, the H∞ control problem is formulated as a two-player zero-sum game; the online CRLA is then employed to obtain an approximation of the optimal value and the Nash equilibrium of the game. The proposed algorithm is implemented on an actor–critic–disturbance NN approximator structure to obtain the solution of the Hamilton–Jacobi–Isaacs equation online, forward in time. During the implementation of the algorithm, the control input, acting as one player, attempts to apply the optimal control, while the other player, the disturbance, tries to produce the worst-case disturbance. Novel update laws are derived for the adaptation of the critic and actor NN weights. Stability of the closed-loop system is guaranteed using the Lyapunov technique, and convergence to the Nash solution of the game is obtained. Simulation results show the effectiveness of the proposed method. Copyright © 2014 John Wiley & Sons, Ltd.
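The concurrent-learning idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual update laws: it is a hypothetical least-squares-style weight update in which the adaptation is driven both by the current residual and by a recorded stack of past data points, so that convergence depends on the richness of the stored stack rather than on persistent excitation of the live signal.

```python
import numpy as np

def critic_update(W, phi, residual, stack, lr=0.05):
    """One hypothetical concurrent-learning step (illustrative only):
    a gradient step on the current squared residual, augmented with
    residuals recomputed on recorded (regressor, target) pairs."""
    grad = residual * phi                       # instantaneous term
    for phi_k, target_k in stack:               # concurrent-learning term
        grad += (phi_k @ W - target_k) * phi_k  # residual on stored data
    return W - lr * grad

# Toy usage: identify W_true from regressor/target pairs. Early samples
# are recorded once; later updates keep reusing them, so the live input
# need not stay exciting.
rng = np.random.default_rng(0)
W_true = np.array([1.0, -2.0])
W = np.zeros(2)
stack = []
for t in range(200):
    phi = rng.standard_normal(2)
    target = phi @ W_true
    residual = phi @ W - target
    if len(stack) < 10:                         # fill the memory stack
        stack.append((phi, target))
    W = critic_update(W, phi, residual, stack)
```

As in concurrent-learning adaptive control generally, the condition replacing persistent excitation is that the stacked regressors span the parameter space (a rank condition on the recorded data), which can be checked once online.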
ISSN: 0890-6327 (print), 1099-1115 (online)
DOI: 10.1002/acs.2485