Model-based deep reinforcement learning for accelerated learning from flow simulations

In recent years, deep reinforcement learning has emerged as a technique to solve closed-loop flow control problems. Employing simulation-based environments in reinforcement learning enables a priori end-to-end optimization of the control system, provides a virtual testbed for safety-critical control applications, and allows one to gain a deep understanding of the control mechanisms. While reinforcement learning has been applied successfully in a number of rather simple flow control benchmarks, a major bottleneck toward real-world applications is the high computational cost and turnaround time of flow simulations. In this contribution, we demonstrate the benefits of model-based reinforcement learning for flow control applications. Specifically, we optimize the policy by alternating between trajectories sampled from flow simulations and trajectories sampled from an ensemble of environment models. The model-based learning reduces the overall training time by up to 85% for the fluidic pinball test case. Even larger savings are expected for more demanding flow simulations.
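
The key algorithmic idea, alternating between expensive trajectories from the flow simulation and cheap trajectories from an ensemble of learned environment models, follows the Dyna-style pattern of model-based reinforcement learning. The Python sketch below illustrates that loop on a toy one-dimensional system. Everything in it is a hypothetical stand-in for illustration only, not the authors' implementation: a damped scalar system replaces the CFD solver, a bootstrapped linear-model ensemble replaces the learned environment models, and a crude random search replaces the policy optimization.

# Minimal sketch of the alternating training loop described in the abstract.
# Toy 1-D system, linear-model ensemble, and random-search policy update are
# all hypothetical stand-ins, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def env_step(state, action):
    # Stand-in for one expensive CFD step: damped dynamics plus control input.
    next_state = 0.9 * state + 0.1 * action + 0.01 * rng.normal()
    reward = -(next_state**2 + 0.01 * action**2)  # drive the state to zero
    return next_state, reward

def rollout(policy, step_fn, horizon=20):
    # Sample one trajectory of (state, action, next_state, reward) tuples.
    s, traj = rng.normal(), []
    for _ in range(horizon):
        a = policy(s)
        s_next, r = step_fn(s, a)
        traj.append((s, a, s_next, r))
        s = s_next
    return traj

def fit_ensemble(data, n_models=5):
    # Fit an ensemble of linear dynamics models on bootstrap resamples.
    X = np.array([[s, a] for s, a, _, _ in data])
    y = np.array([s_next for _, _, s_next, _ in data])
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(data), len(data))
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        models.append(w)
    return models

def model_step(models, state, action):
    # One cheap surrogate step in a randomly chosen ensemble member.
    w = models[rng.integers(len(models))]
    s_next = w @ np.array([state, action])
    return s_next, -(s_next**2 + 0.01 * action**2)

def mean_return(trajs):
    return np.mean([sum(r for *_, r in t) for t in trajs])

theta = 0.0  # linear feedback policy u = theta * s
replay = []
for epoch in range(10):
    # (1) a few expensive trajectories from the "real" environment
    replay += [t for _ in range(2) for t in rollout(lambda s: theta * s, env_step)]
    models = fit_ensemble(replay)
    # (2) many cheap model trajectories; random search stands in for the
    # gradient-based policy updates a real implementation would use
    step = lambda s, a: model_step(models, s, a)
    for _ in range(20):
        cand = theta + 0.1 * rng.normal()
        if mean_return([rollout(lambda s: cand * s, step) for _ in range(5)]) > \
           mean_return([rollout(lambda s: theta * s, step) for _ in range(5)]):
            theta = cand
    print(f"epoch {epoch}: policy gain theta = {theta:+.3f}")

In the paper, step (1) corresponds to running the flow solver and step (2) to rollouts through the model ensemble; shifting most of the sampling to step (2) is what yields the reported training-time reduction of up to 85% on the fluidic pinball case.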

Bibliographic Details
Published in: arXiv.org, 2024-04
Main Authors: Weiner, Andre; Geise, Janis
Format: Article
Language: English
Subjects: Closed loops; Deep learning; Environment models; Flow control; Flow simulation; Safety critical
Online Access: https://www.proquest.com/docview/2932316025
Source: Publicly Available Content Database
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org