Data Driven Control of Interacting Two Tank Hybrid System using Deep Reinforcement Learning
Format: Conference Proceeding
Language: English
Summary: This paper investigates the use of a Deep Neural Network based Reinforcement Learning (RL) algorithm for the design of a controller for a non-linear system. It aims to combine the large amounts of available data with the known dynamics of the non-linear hybrid tank system for effective control of the liquid level. Control systems pose a non-linear optimization problem, and Machine Learning helps solve such problems using large amounts of data. This paper demonstrates the use of Deep Deterministic Policy Gradient (DDPG), an off-policy actor-critic reinforcement learning method that is effective when states and actions lie in continuous rather than discrete spaces. The test bench to which RL is applied is a Multi-Input Multi-Output (MIMO) system, the Interacting Two Tank Hybrid System, with the aim of controlling the liquid levels in the two tanks. In Deep Reinforcement Learning, the agent's policy is implemented by deep neural networks. The idea behind using neural network architectures for reinforcement learning is that the reward signals obtained should strengthen the connections that lead to a good policy. Moreover, these deep neural networks can represent complex functions when given ample amounts of data.
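To make the DDPG ingredients named in the abstract concrete, the following is a minimal toy sketch, not the paper's implementation: it replaces the deep actor and critic networks with scalar linear/tanh function approximators, and the two-tank rig with a hypothetical single-tank level-tracking environment. The environment constants, learning rates, and variable names are all illustrative assumptions; only the structure (deterministic actor, off-policy TD critic, exploration noise, and Polyak-averaged target copies) mirrors DDPG.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-tank stand-in for the paper's two-tank rig:
# state is the level error, the action drives the inflow pump.
def step(level, action, setpoint=0.5, dt=0.1):
    inflow = 0.2 * (np.clip(action, -1.0, 1.0) + 1.0)  # pump flow in [0, 0.4]
    outflow = 0.3 * np.sqrt(max(level, 0.0))           # Torricelli-style drain
    level = float(np.clip(level + dt * (inflow - outflow), 0.0, 1.0))
    return level, -(level - setpoint) ** 2             # quadratic level-error reward

# Tiny actor mu(s) = tanh(theta*s) and critic Q(s, a) = ws*s + wa*a:
# deliberately simple stand-ins for the deep networks used in the paper.
theta = ws = wa = 0.0
theta_t, ws_t, wa_t = theta, ws, wa   # slowly-tracking target copies
alpha, beta, gamma, tau = 1e-3, 1e-2, 0.9, 0.01

level = 0.2
for _ in range(2000):
    s = level - 0.5                                      # tracking error
    a = float(np.tanh(theta * s) + 0.1 * rng.normal())   # exploration noise
    level, r = step(level, a)
    s2 = level - 0.5
    a2 = np.tanh(theta_t * s2)                           # target actor
    # Critic: TD(0) update against the target networks (off-policy)
    td = r + gamma * (ws_t * s2 + wa_t * a2) - (ws * s + wa * a)
    ws += beta * td * s
    wa += beta * td * a
    # Actor: deterministic policy gradient, dQ/da * d mu/d theta
    theta += alpha * wa * (1.0 - np.tanh(theta * s) ** 2) * s
    # Polyak (soft) target updates, DDPG's stabilisation trick
    theta_t = (1 - tau) * theta_t + tau * theta
    ws_t = (1 - tau) * ws_t + tau * ws
    wa_t = (1 - tau) * wa_t + tau * wa
```

In the paper's setting the scalar parameters above become deep-network weights, the state becomes the vector of both tank levels, and updates are drawn from a replay buffer, but the actor/critic/target-network structure is the same.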
ISSN: 2642-7354
DOI: 10.1109/ICCCA52192.2021.9666405