
Multi-scale physics-informed machine learning using the Buckingham Pi theorem

Bibliographic Details
Published in: Journal of Computational Physics, 2023-02, Vol. 474, p. 111810, Article 111810
Main Authors: Oppenheimer, Michael W., Doman, David B., Merrick, Justin D.
Format: Article
Language:English
Description
Summary: Neural networks are a form of machine learning that can be trained to estimate relationships between variables in complex physical processes. They are particularly adept at estimating relationships between variables that lie within the ranges of values on which they have been trained; their performance often diminishes when they must generate estimates for physical processes that lie outside the region of the input space covered by the training set. In physical systems, the possible relationships between input and output variables are limited. Dimensional variables can be replaced by a smaller number of dimensionless parameters that enforce physical constraints between dimensional input and output variables. This can be accomplished using dimensional analysis and the Buckingham Pi theorem to enforce, or test for, dynamic similitude between systems operating at different scales. The process can be exploited for two purposes. The first is to reduce the number of variables correlated by a neural network. The second is to allow a dimensionless neural network to be trained as an interpolator between dimensionless input and output parameters, even in cases where a network trained on dimensional data would be forced to extrapolate because the dimensional input variables lie outside the training set. When dynamic similitude between systems has been achieved, fitting an input-output relationship with dimensionless data generalizes better than fitting it with dimensional data. Examples are presented demonstrating that the proposed process enables accurate modeling of the behavior of physically similar systems operating at different scales.
Highlights:
• Physical constraints imposed based on consideration of base units.
• Dimensional analysis and Buckingham Pi theorem non-dimensionalize systems.
• Unitless similitude parameters become inputs to a reduced-size neural network.
• Unitless network interpolates in situations where dimensioned networks extrapolate.
• Neural networks trained at one scale predict similar physics at different scales.
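The dimensional-analysis step the abstract describes can be sketched numerically: the Buckingham Pi theorem says that a system with n dimensional variables spanning r independent base units admits n − r dimensionless groups, which correspond to null-space vectors of the variables' dimension matrix. The sketch below (not the authors' code; the drag-flow example and variable set are illustrative assumptions) checks this for the classic problem of drag force F on a body of size L in a fluid of density ρ, speed v, and viscosity μ.

```python
import numpy as np

# Dimension matrix over base units M, L, T (rows) for the
# variables F, rho, v, L, mu (columns). Example: force F has
# dimensions M^1 L^1 T^-2, so its column is (1, 1, -2).
D = np.array([
    [ 1,  1,  0, 0,  1],   # mass exponent
    [ 1, -3,  1, 1, -1],   # length exponent
    [-2,  0, -1, 0, -1],   # time exponent
])

rank = np.linalg.matrix_rank(D)
n_pi = D.shape[1] - rank   # Buckingham Pi: count of dimensionless groups

# Candidate Pi groups, written as exponents on (F, rho, v, L, mu).
# A group is dimensionless exactly when D @ exponents == 0.
C_D = np.array([1, -1, -2, -2,  0])   # drag coefficient F / (rho v^2 L^2)
Re  = np.array([0,  1,  1,  1, -1])   # Reynolds number rho v L / mu
assert np.all(D @ C_D == 0) and np.all(D @ Re == 0)

print(rank, n_pi)  # → 3 2
```

Here five dimensional variables collapse to two dimensionless inputs (C_D as a function of Re), which is the variable reduction the paper exploits: a network trained on (Re, C_D) pairs at one scale interpolates for any dynamically similar system, regardless of the dimensional values.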
ISSN: 0021-9991 (print), 1090-2716 (electronic)
DOI: 10.1016/j.jcp.2022.111810