Process control via artificial neural networks and reinforcement learning
Published in: Computers & Chemical Engineering, 1992, Vol. 16 (4), p. 241-251
Main Authors: ,
Format: Article
Language: English
Summary: In the training of artificial neural networks, reinforcement learning substitutes a qualitative binary target of “success” or “failure” for the quantitative error criterion of supervised learning. By this method of learning, control can be established for the special case of operation in which no objective function exists. If no model whatsoever exists of the dynamics of a chemical process, it still may be possible to train an artificial neural network to control the process much as a human being would learn by trial and error.
We describe a network architecture for process control, and explain quantitatively how the weights on the connections in the network can be adjusted to yield the desired control action. An example of a nonlinear CSTR is used to illustrate the performance of the proposed net and compare it with that of a tuned PID controller. The net can be trained to meet almost any criteria selected for control as long as the criteria can be expressed in the form of inequality constraints, but requires extensive, and perhaps excessively long, training on a serial computer to do so. (An illustrative code sketch of this reward-driven training idea appears after the record below.)
ISSN: 0098-1354, 1873-4375
DOI: 10.1016/0098-1354(92)80045-B
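
The summary above describes training a neural controller from a binary success/failure signal rather than a quantitative error. The following Python sketch illustrates that general idea on a toy process; it is not the authors' architecture or update rule. The first-order plant model, network size, Gaussian exploration policy, REINFORCE-style update, learning rate, and success tolerance are all illustrative assumptions.

```python
# Illustrative sketch only: a stochastic neural controller trained from a
# binary success/failure reward, in the spirit of the scheme summarized above.
# The plant, network size, and hyperparameters are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

# Toy first-order nonlinear process standing in for the CSTR (assumption).
def plant_step(x, u, dt=0.1):
    """One Euler step of dx/dt = -x + u + 0.5*tanh(x)."""
    return x + dt * (-x + u + 0.5 * np.tanh(x))

# Two-layer network mapping (state, setpoint error) to a control move.
n_in, n_hid = 2, 8
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, n_hid)
sigma = 0.3   # exploration noise on the control action (assumed)
lr = 0.05     # learning rate (assumed)

def forward(s):
    h = np.tanh(W1 @ s)
    return W2 @ h, h          # mean control action and hidden activations

def run_episode(setpoint=1.0, steps=50, tol=0.1):
    """Roll the controller out; success means |x - setpoint| <= tol at the
    end of the episode (an inequality-constraint style criterion)."""
    global W1, W2
    x = 0.0
    grads1, grads2 = np.zeros_like(W1), np.zeros_like(W2)
    for _ in range(steps):
        s = np.array([x, setpoint - x])
        mu, h = forward(s)
        u = mu + sigma * rng.normal()       # stochastic exploration
        # Accumulate the log-likelihood gradient of the Gaussian policy.
        dmu = (u - mu) / sigma**2
        grads2 += dmu * h
        grads1 += dmu * np.outer(W2 * (1.0 - h**2), s)
        x = plant_step(x, u)
    reward = 1.0 if abs(x - setpoint) <= tol else -1.0   # binary success/failure
    # REINFORCE-style update: reinforce the episode's actions on success,
    # discourage them on failure.
    W1 += lr * reward * grads1 / steps
    W2 += lr * reward * grads2 / steps
    return reward

successes = sum(run_episode() > 0 for _ in range(2000))
print(f"successful episodes: {successes}/2000")
```

Because each episode yields only a single ±1 signal, the update has far less information to work with than a supervised error gradient, which is consistent with the summary's remark that such training can be extensive on a serial computer.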