Finite-Time Stabilization and Optimal Feedback Control

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2016-04, Vol. 61 (4), p. 1069-1074
Main Authors: Haddad, Wassim M., L'Afflitto, Andrea
Format: Article
Language: English
Description
Summary: Finite-time stability involves dynamical systems whose trajectories converge to an equilibrium state in finite time. Since finite-time convergence implies nonuniqueness of system solutions in reverse time, such systems possess non-Lipschitzian dynamics. Sufficient conditions for finite-time stability have been developed in the literature using continuous Lyapunov functions. In this technical note, we develop a framework for addressing the problem of optimal nonlinear analysis and feedback control for finite-time stability and finite-time stabilization. Finite-time stability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function that satisfies a differential inequality involving fractional powers. This Lyapunov function is shown to be a solution of a partial differential equation corresponding to a steady-state form of the Hamilton-Jacobi-Bellman equation, thereby guaranteeing both finite-time stability and optimality.
ISSN: 0018-9286
1558-2523
DOI: 10.1109/TAC.2015.2454891
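The "differential inequality involving fractional powers" mentioned in the summary is, in its classical scalar form, the condition V̇(x) ≤ -c V(x)^α with c > 0 and 0 < α < 1, which yields the settling-time bound T ≤ V(x₀)^(1-α)/(c(1-α)). As an illustrative sketch only (not the controller from the paper), the following simulates the non-Lipschitz scalar system ẋ = -k·sign(x)·|x|^α, for which V(x) = |x| satisfies that inequality; the function name and parameters are chosen here for demonstration:

```python
import math

def simulate(x0=1.0, k=1.0, alpha=0.5, dt=1e-4, t_max=5.0):
    """Euler-integrate xdot = -k * sign(x) * |x|**alpha.

    The vector field is non-Lipschitz at the origin, which is what
    permits arrival at x = 0 in finite time (and nonunique solutions
    in reverse time, as the abstract notes).
    """
    x, t = x0, 0.0
    while t < t_max and x != 0.0:
        step = -k * math.copysign(abs(x) ** alpha, x) * dt
        if abs(step) >= abs(x):
            # Step would cross zero: trajectory has reached the origin.
            x = 0.0
        else:
            x += step
        t += dt
    return x, t

# With x0 = 1, k = 1, alpha = 0.5, the theoretical settling time is
# T = |x0|**(1 - alpha) / (k * (1 - alpha)) = 2.0.
x_final, t_settle = simulate()
```

Running this, `t_settle` lands close to the analytic bound of 2.0, in contrast to an exponentially stable linear system ẋ = -kx, which only approaches the origin asymptotically.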