
A non‐linear non‐intrusive reduced order model of fluid flow by auto‐encoder and self‐attention deep learning methods

Bibliographic Details
Published in: International Journal for Numerical Methods in Engineering, 2023-07, Vol. 124 (13), p. 3087-3111
Main Authors: Fu, R., Xiao, D., Navon, I.M., Fang, F., Yang, L., Wang, C., Cheng, S.
Format: Article
Language:English
Description
Summary: This paper presents a new nonlinear non‐intrusive reduced‐order model (NL‐NIROM) that outperforms the traditional proper orthogonal decomposition (POD)‐based reduced‐order model (ROM). This improvement is achieved through the use of auto‐encoder (AE) and self‐attention based deep learning methods. The novelty of this work is that it uses a stacked auto‐encoder (SAE) network to project the original high‐dimensional dynamical system onto a low‐dimensional nonlinear subspace and predicts the fluid dynamics using a self‐attention based deep learning method. The paper introduces a new model reduction neural network architecture for fluid flow problems, as well as a linear non‐intrusive reduced‐order model (L‐NIROM) based on POD and the self‐attention mechanism. In the NL‐NIROM, the SAE network compresses the high‐dimensional physical information into much smaller representations in a reduced latent space, expressed as a set of codes in the middle layer of the SAE network. The codes at different time levels are then used to train a self‐attention based deep learning network that constructs a set of hyper‐surfaces: the inputs of the network are the codes at previous time levels, and its outputs are the codes at the current time level. The predicted codes are then projected back to the original full space by the decoder layers of the SAE network. The capability of the new model, NL‐NIROM, is demonstrated through two test cases: flow past a cylinder and a lock exchange. The results show that the NL‐NIROM is more accurate than the popular POD‐based model reduction method, namely the L‐NIROM.
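The workflow the abstract describes (encode full-order snapshots to latent codes, advance the codes with self‐attention, decode back to full space) can be illustrated with a minimal sketch. The snapshot dimension, latent size, history length, layer widths, and class names below are illustrative assumptions, not the authors' architecture; PyTorch is used as a stand‐in framework.

# Minimal sketch of the NL-NIROM workflow from the abstract. All sizes
# (FULL_DIM, LATENT_DIM, HISTORY) and layer widths are assumptions made
# for illustration, not values from the paper.
import torch
import torch.nn as nn

FULL_DIM = 1000   # assumed size of one full-order snapshot
LATENT_DIM = 8    # assumed size of the latent code
HISTORY = 4       # assumed number of previous time levels fed to the predictor

class StackedAutoEncoder(nn.Module):
    """Compress a snapshot to a low-dimensional code and reconstruct it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(FULL_DIM, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, FULL_DIM),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

class AttentionPredictor(nn.Module):
    """Predict the code at the current time level from previous codes."""
    def __init__(self, n_heads=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(LATENT_DIM, n_heads, batch_first=True)
        self.out = nn.Linear(LATENT_DIM, LATENT_DIM)

    def forward(self, code_history):            # (batch, HISTORY, LATENT_DIM)
        attended, _ = self.attn(code_history, code_history, code_history)
        return self.out(attended[:, -1, :])      # code at the current time level

# One step of the surrogate time-stepper on dummy data.
sae, predictor = StackedAutoEncoder(), AttentionPredictor()
snapshots = torch.randn(HISTORY, FULL_DIM)       # previous full-order states
codes = sae.encoder(snapshots).unsqueeze(0)      # compress to latent codes
next_code = predictor(codes)                     # advance in the latent space
next_snapshot = sae.decoder(next_code)           # project back to full space
print(next_snapshot.shape)                       # torch.Size([1, 1000])

In a full non‐intrusive ROM, the decoded snapshot would be appended to the code history and the predictor applied recursively to march the flow forward in time without calling the original solver.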
ISSN: 0029-5981, 1097-0207
DOI: 10.1002/nme.7240