Feature importance in neural networks as a means of interpretation for data-driven turbulence models
Published in: Computers & Fluids, 2023-10, Vol. 265, Article 105993
Main Authors:
Format: Article
Language: English
Summary: This work aims at making the prediction process of neural network-based turbulence models more transparent. Because of its black-box components, the model’s predictions cannot readily be anticipated. This paper is therefore concerned with quantifying each feature’s importance for the predictions of trained, fixed neural networks (NNs), which is one possible type of explanation for opaque models. Two conceptually different attribution methods, namely permutation feature importance and DeepSHAP, are chosen in order to assess global, regional and local feature importance. The neuralSST turbulence model serves as the example and is investigated in greater detail. While the global importance scores provide a quick and reliable way to detect irrelevant features and may thus be used for feature selection, only the (semi-)local analysis provides meaningful and trustworthy interpretations of the model. In fact, the local importance scores suggest that hypotheses with a common high-level influence on the turbulence model, e.g. adjusting the net production of turbulent kinetic energy or the Reynolds stress anisotropy, are similarly affected by local mean flow structures such as attached boundary layers, free shear layers or recirculation zones.
Highlights:
• Concise review of feature attribution methods for neural networks (NNs).
• Potential analysis of attribution methods for explaining NN-based closures.
• Comparison of the explanations of two conceptually different attribution methods.
• Global, regional and local feature importance analysis of the neuralSST model.
• Local methods provide more detailed results and do not depend on grid resolution.
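The two attribution methods named in the summary are well-established techniques: permutation feature importance shuffles one input column at a time and measures the resulting loss in predictive quality, while DeepSHAP (available, for example, via the shap package's DeepExplainer) attributes individual predictions to the input features. The sketch below illustrates only the permutation part with a generic predictor; the toy model, data shapes and seed are illustrative assumptions and are unrelated to the neuralSST closure or its actual input features.

```python
# Minimal sketch of permutation feature importance for a fixed, trained model.
# Everything here (toy regressor, data, metric) is an illustrative assumption.
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=None):
    """Mean degradation of the metric when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    scores = np.zeros((X.shape[1], n_repeats))
    for j in range(X.shape[1]):
        for r in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-target link
            scores[j, r] = metric(y, predict(Xp)) - baseline
    return scores.mean(axis=1)  # one importance score per feature

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
    predict = lambda Z: 3.0 * Z[:, 0] + 0.5 * Z[:, 1]   # stand-in for a trained NN
    mse = lambda yt, yp: float(np.mean((yt - yp) ** 2))
    print(permutation_importance(predict, X, y, mse, seed=1))
```

A large positive score means that destroying the link between a feature and the target degrades the prediction, i.e. the feature matters; scores near zero flag irrelevant features, which corresponds to the feature-selection use of the global scores mentioned in the summary.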
ISSN: 0045-7930, 1879-0747
DOI: 10.1016/j.compfluid.2023.105993