Multi-objective reinforcement learning-based approach for pressurized water reactor optimization
Published in: | Annals of Nuclear Energy, 2024-09-15, Vol. 205, Article 110582
---|---
Main Authors: | Seurin, Paul; Shirvan, Koroush
Format: | Article
Language: | English
Subjects: | Curriculum learning; Multi-objective optimization; PEARL; PWR loading pattern optimization; Reinforcement learning
DOI: | 10.1016/j.anucene.2024.110582
ISSN: | 0306-4549 (print); 1873-2100 (electronic)
Publisher: | Elsevier Ltd
A novel method, the Pareto Envelope Augmented with Reinforcement Learning (PEARL), has been developed to address the challenges posed by multi-objective problems, particularly in the field of engineering, where the evaluation of candidate solutions can be time-consuming. PEARL distinguishes itself from traditional policy-based multi-objective Reinforcement Learning methods by learning a single policy, eliminating the need for multiple neural networks to independently solve simpler sub-problems. Several versions inspired by deep learning and evolutionary techniques have been crafted, catering to both unconstrained and constrained problem domains. Curriculum Learning (CL) is harnessed to effectively manage constraints in these versions. PEARL's performance is first evaluated on classical multi-objective benchmarks. Additionally, it is tested on two practical PWR core Loading Pattern (LP) optimization problems to showcase its real-world applicability. The first problem involves optimizing the cycle length (Lc) and the rod-integrated peaking factor (FΔh) as the primary objectives, while the second problem incorporates the average enrichment as an additional objective. Furthermore, PEARL addresses three types of constraints related to boron concentration (Cb), peak pin burnup (Bumax), and peak pin power (Fq). The results are systematically compared against conventional approaches from stochastic optimization. Notably, PEARL, specifically the PEARL-NdS variant, efficiently uncovers a Pareto front without necessitating additional efforts from the algorithm designer, as opposed to a single optimization with scaled objectives. It also outperforms the classical approach across multiple performance metrics, including the hypervolume. Future work will encompass a sensitivity analysis of hyper-parameters with statistical analysis to optimize the application of PEARL and extend it to more intricate problems.
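The PEARL-NdS variant named in the abstract relies on non-dominated sorting to extract the Pareto envelope from a population of candidate solutions. For reference, here is a minimal Python sketch of the underlying dominance test; the function name and the all-objectives-minimized convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return a boolean mask selecting the non-dominated rows.

    `points` is an (n, m) array of objective vectors. All objectives
    are assumed to be minimized; a maximized quantity such as cycle
    length would be negated before calling this function.
    """
    n = points.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue  # already dominated; transitivity makes rechecking redundant
        # Row j dominates row i if it is <= in every objective
        # and strictly < in at least one.
        dominated_by = (np.all(points <= points[i], axis=1)
                        & np.any(points < points[i], axis=1))
        if dominated_by.any():
            mask[i] = False
    return mask
```

For the first PWR problem in the abstract, maximizing Lc while minimizing FΔh would be cast into this convention by negating the cycle-length column before the dominance test.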
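The abstract also cites the hypervolume as a comparison metric: the volume of objective space dominated by a front, measured against a fixed reference point that is worse than every solution. Below is a minimal two-objective sketch, again assuming minimization and a mutually non-dominated input front; it illustrates the metric, not the implementation used in the paper.

```python
import numpy as np

def hypervolume_2d(front: np.ndarray, ref: np.ndarray) -> float:
    """Hypervolume of a 2-D front (both objectives minimized)
    relative to reference point `ref`.

    `front` must be mutually non-dominated, so sorting by the first
    objective makes the second objective strictly decreasing.
    """
    pts = front[np.argsort(front[:, 0])]
    hv, prev_y = 0.0, ref[1]
    # Sweep left to right, adding the rectangle each point
    # contributes between its predecessor's height and its own.
    for x, y in pts:
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv
```

For example, with `front = np.array([[0.0, 2.0], [1.0, 0.0]])` and `ref = np.array([2.0, 3.0])`, the function returns 4.0, the area of the dominated region. A larger hypervolume indicates a front that is both closer to the ideal point and better spread, which is why the abstract uses it to compare PEARL against the classical scaled-objective approach.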