Parallel implementation and performance of super-resolution generative adversarial network turbulence models for large-eddy simulation

Bibliographic Details
Published in: Computers & Fluids, 2025-02, Vol. 288, Article 106498, p. 106498
Main Authors: Nista, Ludovico, Schumann, Christoph D.K., Petkov, Peicho, Pavlov, Valentin, Grenga, Temistocle, MacArt, Jonathan F., Attili, Antonio, Markov, Stoyan, Pitsch, Heinz
Format: Article
Language:English
Description
Summary: Super-resolution (SR) generative adversarial networks (GANs) are promising for turbulence closure in large-eddy simulation (LES) due to their ability to accurately reconstruct high-resolution data from low-resolution fields. Current model training and inference strategies are not sufficiently mature for large-scale, distributed calculations due to the computational demands and often unstable training of SR-GANs, which limits the exploration of improved model structures, training strategies, and loss-function definitions. Integrating SR-GANs into LES solvers for inference-coupled simulations is also necessary to assess their a posteriori accuracy, stability, and cost. We investigate parallelization strategies for SR-GAN training and inference-coupled LES, focusing on computational performance and reconstruction accuracy. We examine distributed data-parallel training strategies for hybrid CPU–GPU node architectures and the associated influence of low-/high-resolution subbox size, global batch size, and discriminator accuracy. Accurate predictions require training subboxes that are sufficiently large relative to the Kolmogorov length scale. Care must also be taken with the coupled effects of training batch size, learning rate, number of training subboxes, and the discriminator's learning capability. We introduce a data-parallel SR-GAN training and inference library for heterogeneous architectures that enables exchange between the LES solver and SR-GAN inference at runtime. We investigate the predictive accuracy and computational performance of this arrangement, with particular focus on the overlap (halo) size required for accurate SR reconstruction. Similarly, a posteriori parallel scaling for efficient inference-coupled LES is constrained by the SR subdomain size, GPU utilization, and reconstruction accuracy.
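The halo (overlap) handling mentioned above can be illustrated with a minimal sketch. This is not the paper's SuperLES implementation; it is a hypothetical 1-D decomposition showing why each SR subdomain needs ghost cells from its periodic neighbors before independent reconstruction, after which the halos are cropped and the interiors tile the domain exactly. The function names and sizes are illustrative assumptions.

```python
# Hypothetical sketch of subdomain decomposition with halo cells for
# per-subdomain SR inference on a periodic 1-D field (not the SuperLES API).

def split_with_halo(field, n_sub, halo):
    """Split `field` into n_sub equal subdomains, each padded with
    `halo` periodic ghost cells on both sides."""
    n = len(field)
    assert n % n_sub == 0, "field must divide evenly into subdomains"
    size = n // n_sub
    subs = []
    for i in range(n_sub):
        lo, hi = i * size, (i + 1) * size
        # Periodic indexing supplies the ghost cells at the domain edges.
        subs.append([field[j % n] for j in range(lo - halo, hi + halo)])
    return subs

def merge_cropped(subs, halo):
    """Drop the halo cells of each (reconstructed) subdomain and concatenate."""
    out = []
    for s in subs:
        out.extend(s[halo:len(s) - halo])
    return out

field = list(range(12))
subs = split_with_halo(field, n_sub=3, halo=2)
# Each subdomain carries 4 interior cells plus 2 ghost cells per side.
assert all(len(s) == 8 for s in subs)
# Cropping the halos after reconstruction recovers the full field exactly.
assert merge_cropped(subs, halo=2) == field
```

A larger halo raises per-subdomain work and inter-rank communication, which is the accuracy-versus-cost trade-off the article examines.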
Based on these findings, we establish guidelines and best practices to optimize resource utilization and parallel acceleration of SR-GAN turbulence model training and inference-coupled LES calculations while maintaining predictive accuracy.
Highlights:
• Accelerated SR-GAN training for turbulence closure using distributed data parallelism.
• Accurate a priori turbulence predictions require training on large subboxes.
• Key factors in SR-GAN training: batch size, learning rate, number of subboxes, and the discriminator.
• Integration of LES solvers with SR-GAN inference through the SuperLES library is introduced.
• Guidelines provided for GAN-based SR training and SR-LES closure.
ISSN: 0045-7930
DOI: 10.1016/j.compfluid.2024.106498