Multi-objective reinforcement learning-based approach for pressurized water reactor optimization

Bibliographic Details
Published in: Annals of Nuclear Energy, 2024-09, Vol. 205, Article 110582
Main Authors: Seurin, Paul; Shirvan, Koroush
Format: Article
Language:English
Description
Summary: A novel method, the Pareto Envelope Augmented with Reinforcement Learning (PEARL), has been developed to address the challenges posed by multi-objective problems, particularly in the field of engineering, where the evaluation of candidate solutions can be time-consuming. PEARL distinguishes itself from traditional policy-based multi-objective Reinforcement Learning methods by learning a single policy, eliminating the need for multiple neural networks to independently solve simpler sub-problems. Several versions, inspired by deep learning and evolutionary techniques, have been crafted, catering to both unconstrained and constrained problem domains. Curriculum Learning (CL) is harnessed to effectively manage constraints in these versions. PEARL's performance is first evaluated on classical multi-objective benchmarks. Additionally, it is tested on two practical PWR core Loading Pattern (LP) optimization problems to showcase its real-world applicability. The first problem involves optimizing the Cycle length (LC) and the rod-integrated peaking factor (FΔh) as the primary objectives, while the second problem incorporates the average enrichment as an additional objective. Furthermore, PEARL addresses three types of constraints related to boron concentration (Cb), peak pin burnup (Bumax), and peak pin power (Fq). The results are systematically compared against conventional approaches from stochastic optimization. Notably, PEARL, specifically the PEARL-NdS variant, efficiently uncovers a Pareto front without necessitating additional effort from the algorithm designer, as opposed to a single optimization with scaled objectives. It also outperforms the classical approach across multiple performance metrics, including the Hyper-volume. Future work will encompass a sensitivity analysis of hyper-parameters with statistical analysis to optimize the application of PEARL and extend it to more intricate problems.
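Two concepts central to the abstract, the Pareto front recovered by the PEARL-NdS (non-dominated sorting) variant and the Hyper-volume metric used for comparison, can be illustrated generically. The sketch below is not from the paper; the function names `pareto_front` and `hypervolume_2d` are illustrative, and it assumes a plain two-objective minimization setting with a user-chosen reference point:

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` (minimizing every objective).

    A point p is dominated if some other point q is no worse in all
    objectives and differs from p (i.e. strictly better in at least one).
    """
    return [
        p for p in points
        if not any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
    ]


def hypervolume_2d(front, ref):
    """Area dominated by a 2-D minimization front, bounded by `ref`.

    `ref` must be worse (larger) than every front point in both objectives.
    Sweep the front left-to-right, stacking the rectangle each point adds.
    """
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in sorted(front):          # ascending in objective 1
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv


# Toy two-objective population, e.g. (peaking factor, -cycle length) style trade-off.
pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = pareto_front(pop)                 # → [(1, 5), (2, 3), (4, 1)]
hv = hypervolume_2d(front, ref=(6, 6))    # → 17.0
```

A larger hypervolume means the front sits closer to the ideal point and is better spread, which is why the abstract uses it to compare PEARL against the stochastic-optimization baselines.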
ISSN:0306-4549
1873-2100
DOI:10.1016/j.anucene.2024.110582