
A Reinforcement Learning-based Adaptive Time-Delay Control and Its Application to Robot Manipulators

Bibliographic Details
Main Authors: Baek, Seungmin; Baek, Jongchan; Choi, Jinsuk; Han, Soohee
Format: Conference Proceeding
Language: English
Description
Summary: This study proposes an innovative reinforcement learning-based time-delay control (RL-TDC) scheme that provides more intelligent, timely, and aggressive control efforts than existing simple-structured adaptive time-delay controls (ATDCs), which are well known for achieving good tracking performance in practical applications. The proposed scheme adopts a state-of-the-art RL algorithm, soft actor-critic (SAC), to adjust the inertia gain matrix of the time-delay control toward maximizing the expected return computed from tracking errors over all future time steps. By learning the dynamics of the robot manipulator in a data-driven manner and capturing its intractable and complicated phenomena, the proposed RL-TDC is trained to effectively suppress the time-delay estimation (TDE) errors inherent in time-delay control, thereby achieving the best tracking performance attainable within the given control capacity limits. Simulations with a robot manipulator demonstrate that the proposed RL-TDC avoids conservatively small control actions when large ones are required, maximizing tracking performance, and that the stability condition is fully exploited to produce more effective control actions.
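
As a rough illustration of the structure described in the abstract, the sketch below shows a standard discrete-time TDC law, τ(t) = M̄(q̈_d + K_D ė + K_P e) + τ(t−L) − M̄ q̈(t−L), with the inertia gain matrix M̄ supplied by an RL policy rather than fixed. This is not the authors' code; the state features, reward shaping, and the `sac_policy` interface are hypothetical assumptions for illustration only.

```python
# Minimal sketch (not the paper's implementation) of a time-delay control (TDC)
# law whose inertia gain matrix M_bar is set each step by an RL policy, as the
# abstract describes for RL-TDC. State features, reward, and the sac_policy
# interface below are illustrative assumptions.
import numpy as np

def tdc_torque(m_bar_diag, qdd_des, e, edot, qdd_prev, tau_prev, Kp, Kd):
    """One step of time-delay control.

    tau(t) = M_bar*(qdd_des + Kd*edot + Kp*e) + tau(t-L) - M_bar*qdd(t-L),
    where tau(t-L) - M_bar*qdd(t-L) is the time-delay estimate of the unknown
    lumped dynamics and M_bar is the (diagonal) gain the RL agent adjusts.
    """
    M_bar = np.diag(m_bar_diag)
    u_aux = qdd_des + Kd @ edot + Kp @ e      # desired acceleration with PD error injection
    tde = tau_prev - M_bar @ qdd_prev         # time-delay estimation of the lumped dynamics
    return M_bar @ u_aux + tde

def reward(e, edot):
    # Illustrative reward: penalize tracking error and its rate so that the
    # agent maximizes the expected return over future tracking errors.
    return -(np.sum(e**2) + 0.1 * np.sum(edot**2))

# In a training loop, an off-the-shelf SAC agent would map an observed state,
# e.g. [e, edot, qdd_prev], to m_bar_diag within actuator/gain limits:
#   m_bar_diag = sac_policy(np.concatenate([e, edot, qdd_prev]))   # hypothetical call
#   tau = tdc_torque(m_bar_diag, qdd_des, e, edot, qdd_prev, tau_prev, Kp, Kd)
```
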
ISSN: 2378-5861
DOI: 10.23919/ACC53348.2022.9867835