A deep reinforcement learning based multi-objective optimization for the scheduling of oxygen production system in integrated iron and steel plants
Published in: Applied Energy, 2023-09, Vol. 345, p. 121332, Article 121332
Main Authors: , , ,
Format: Article
Language: English
Summary:
• A novel multi-objective optimization model for oxygen production systems.
• A novel DRL-MOEA method for improving the solving efficiency of MOEA.
• A surrogate model for training agents of DRL-MOEA in a short time.
• Analyses of operating cost and operating-mode transitions in trade-off terms.
• Analyses of schedules under flexible demand and various electricity price schemes.
The oxygen production system in integrated iron and steel plants is a highly energy-intensive sector that produces gaseous oxygen for the manufacturing processes. This study investigates an oxygen production system with cryogenic air separation units (ASUs), vaporizers, and liquefiers under frequently changing demand and various electricity price contracts. A multi-objective optimization model is established that minimizes the total operating cost, to reduce energy consumption, and simultaneously minimizes the number of operating-mode switches, to maintain operational stability. The proposed model not only schedules the operating modes of the ASUs and the on/off modes of the vaporizers and liquefiers but also determines the production levels of these units. To solve the problem, a multi-objective evolutionary algorithm (MOEA) is proposed in which proximal policy optimization, a deep reinforcement learning (DRL) method, is incorporated to adaptively select the mating individuals and determine the crossover ratio at each evolutionary iteration. In addition, a surrogate model of the total operating cost is presented to accelerate the training of the proposed DRL-based multi-objective evolutionary algorithm (DRL-MOEA). The performance of the proposed algorithm is demonstrated on practical instances; experimental results show that, compared to the baseline MOEA, it reduces the total operating cost by up to 0.86% and the number of operating-mode switches by as much as 14.41%. The model offers on-site managers solutions that coordinate supply and demand with a good trade-off between reducing overall energy consumption and maintaining streamlined operating conditions.
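The abstract's two objectives — total operating cost under an electricity price contract, and the number of operating-mode switches — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, mode labels, power figures, and prices below are all illustrative assumptions for a single ASU under a time-of-use tariff.

```python
# Hedged sketch: evaluating the two objectives from the abstract for one
# candidate schedule of a single ASU. All numbers are made up for illustration.

def evaluate_schedule(modes, power_per_mode, prices):
    """Return (total_operating_cost, mode_switch_count).

    modes          -- operating mode chosen for each scheduling period
    power_per_mode -- assumed electricity use (MWh) of each mode per period
    prices         -- electricity price ($/MWh) in each period
    """
    # Objective 1: total operating cost over the horizon.
    cost = sum(power_per_mode[m] * p for m, p in zip(modes, prices))
    # Objective 2: number of operating-mode switches between adjacent periods.
    switches = sum(1 for a, b in zip(modes, modes[1:]) if a != b)
    return cost, switches

# Illustrative two-mode schedule that shifts to low production when the
# time-of-use price peaks, at the cost of two mode switches.
schedule = ["high", "high", "low", "low", "high"]
power = {"high": 12.0, "low": 8.0}
tou_prices = [30.0, 30.0, 80.0, 80.0, 30.0]

cost, switches = evaluate_schedule(schedule, power, tou_prices)
# cost = 2360.0, switches = 2
```

A MOEA would score each candidate schedule on this (cost, switches) pair and keep the non-dominated ones; the paper's DRL-MOEA additionally uses PPO to steer mating selection and the crossover ratio during that search.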
ISSN: 0306-2619, 1872-9118
DOI: 10.1016/j.apenergy.2023.121332