Deep reinforcement learning for computational fluid dynamics on HPC systems
Reinforcement learning (RL) is highly suitable for devising control strategies in the context of dynamical systems. A prominent instance of such a dynamical system is the system of equations governing fluid dynamics. Recent research results indicate that RL-augmented computational fluid dynamics (CFD) solvers can exceed the current state of the art, for example in the field of turbulence modeling.
Published in: | Journal of computational science 2022-11, Vol.65, p.101884, Article 101884 |
---|---|
Main Authors: | Kurz, Marius; Offenhäuser, Philipp; Viola, Dominic; Shcherbakov, Oleksandr; Resch, Michael; Beck, Andrea |
Format: | Article |
Language: | English |
Subjects: | Computational fluid dynamics; Deep reinforcement learning; High-performance computing; Large eddy simulation; Turbulence modeling |
container_start_page | 101884 |
container_title | Journal of computational science |
container_volume | 65 |
creator | Kurz, Marius; Offenhäuser, Philipp; Viola, Dominic; Shcherbakov, Oleksandr; Resch, Michael; Beck, Andrea |
description | Reinforcement learning (RL) is highly suitable for devising control strategies in the context of dynamical systems. A prominent instance of such a dynamical system is the system of equations governing fluid dynamics. Recent research results indicate that RL-augmented computational fluid dynamics (CFD) solvers can exceed the current state of the art, for example in the field of turbulence modeling. However, while in supervised learning, the training data can be generated a priori in an offline manner, RL requires constant run-time interaction and data exchange with the CFD solver during training. In order to leverage the potential of RL-enhanced CFD, the interaction between the CFD solver and the RL algorithm thus has to be implemented efficiently on high-performance computing (HPC) hardware. To this end, we present Relexi as a scalable RL framework that bridges the gap between machine learning workflows and modern CFD solvers on HPC systems, providing both components with its specialized hardware. Relexi is built with modularity in mind and allows easy integration of various HPC solvers by means of the in-memory data transfer provided by the SmartSim library. Here, we demonstrate that the Relexi framework can scale up to hundreds of parallel environments on thousands of cores. This allows to leverage modern HPC resources to either enable larger problems or faster turnaround times. Finally, we demonstrate the potential of an RL-augmented CFD solver by finding a control strategy for optimal eddy viscosity selection in large eddy simulations.
• A novel scalable reinforcement learning framework for computational fluid dynamics. • Investigation of the framework’s scaling behavior on heterogeneous high-performance computing systems. • Application of the framework to derive data-driven turbulence models for large eddy simulation at scale. |
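The abstract describes an RL agent that interacts at run time with many parallel CFD environments and learns to select an eddy viscosity for large eddy simulation. The toy Python sketch below illustrates that loop under stated assumptions: the agent's action is a Smagorinsky-type coefficient C_s, each "environment" stands in for one solver instance, and rewards are averaged across environments. All names here (`ToyLESEnvironment`, `run_episode`) are invented for illustration and are NOT the Relexi or SmartSim API.

```python
import random

# Minimal sketch of the training interaction the abstract describes:
# an agent proposes a Smagorinsky-type coefficient C_s, several parallel
# environments (stand-ins for CFD solver instances) evaluate it, and the
# averaged reward drives the selection. Hypothetical names throughout.

class ToyLESEnvironment:
    """Stand-in for one CFD solver instance exchanging data with the agent."""

    def __init__(self, delta, target_nu_t, seed):
        self.delta = delta          # LES filter width
        self.target = target_nu_t   # reward peaks when nu_t matches this value
        self.rng = random.Random(seed)

    def step(self, c_s):
        # Smagorinsky closure: nu_t = (C_s * delta)^2 * |S|, with a random
        # strain-rate magnitude |S| standing in for the actual flow state.
        strain_mag = self.rng.uniform(0.5, 1.5)
        nu_t = (c_s * self.delta) ** 2 * strain_mag
        reward = -abs(nu_t - self.target)
        return nu_t, reward

def run_episode(envs, c_s):
    """Broadcast one action to all parallel environments, average the rewards."""
    return sum(env.step(c_s)[1] for env in envs) / len(envs)

if __name__ == "__main__":
    envs = [ToyLESEnvironment(delta=0.1, target_nu_t=0.002, seed=i) for i in range(8)]
    # Crude policy search: evaluate candidate coefficients and keep the best,
    # mimicking how an RL agent would refine C_s from episode returns.
    reward, c_s = max((run_episode(envs, c), c) for c in (0.05, 0.10, 0.17, 0.30))
    print(f"best C_s = {c_s:.2f}, mean reward = {reward:.5f}")
```

In the actual framework, the data exchange between agent and solver happens via SmartSim's in-memory database on dedicated HPC hardware rather than via in-process calls as in this sketch.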
doi_str_mv | 10.1016/j.jocs.2022.101884 |
format | article |
publisher | Elsevier B.V. |
rights | 2022 The Authors |
eissn | 1877-7511 |
fulltext | fulltext |
identifier | ISSN: 1877-7503 |
ispartof | Journal of computational science, 2022-11, Vol.65, p.101884, Article 101884 |
issn | 1877-7503; 1877-7511 |
language | eng |
source | Elsevier |
subjects | Computational fluid dynamics; Deep reinforcement learning; High-performance computing; Large eddy simulation; Turbulence modeling |
title | Deep reinforcement learning for computational fluid dynamics on HPC systems |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-25T23%3A22%3A06IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-elsevier_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Deep%20reinforcement%20learning%20for%20computational%20fluid%20dynamics%20on%20HPC%20systems&rft.jtitle=Journal%20of%20computational%20science&rft.au=Kurz,%20Marius&rft.date=2022-11&rft.volume=65&rft.spage=101884&rft.pages=101884-&rft.artnum=101884&rft.issn=1877-7503&rft.eissn=1877-7511&rft_id=info:doi/10.1016/j.jocs.2022.101884&rft_dat=%3Celsevier_cross%3ES1877750322002435%3C/elsevier_cross%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c344t-1008dad266dc3a2faf891f8001f9919728eaf3ec19362af68b4b24a98907ac963%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |