Automated construction of effective potential via algorithmic implicit bias
Published in: Journal of Computational Physics, 2024-10, Vol. 514, p. 113206, Article 113206
Main Authors: ,
Format: Article
Language: English
Summary: We introduce a novel approach for decomposing and learning every scale of a given multiscale objective function in ℝ^d, where d ⩾ 1. This approach leverages a recently demonstrated implicit bias of the gradient descent optimization method [44], which enables the automatic generation of data that nearly follow a Gibbs distribution with an effective potential at any desired scale. One application of this automated effective-potential modeling is the construction of reduced-order models. For instance, a deterministic surrogate Hamiltonian model can be developed that substantially softens the stiffness bottlenecking the simulation, while maintaining the accuracy of phase portraits at the scale of interest. Similarly, a stochastic surrogate model can be constructed at a desired scale, such that both its equilibrium and out-of-equilibrium behaviors (characterized by the auto-correlation function and mean path) align with those of a damped mechanical system whose potential is the original multiscale function. The robustness and efficiency of the proposed approach in multi-dimensional scenarios are demonstrated through a series of numerical experiments. A by-product of this development is a method for anisotropic noise estimation and calibration. More precisely, the Langevin model of a stochastic mechanical system may not have isotropic noise in practice, and we provide a systematic algorithm to quantify its covariance matrix without directly measuring the noise. In this case the system may not admit a closed-form expression for its invariant distribution either, but with this tool we can design the friction matrix appropriately to calibrate the system so that its invariant distribution has a closed-form Gibbs expression.
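As a concrete illustration of the implicit-bias mechanism mentioned in the summary, the following is a minimal one-dimensional sketch, not the paper's algorithm: plain gradient descent with a step size too large to resolve the fine scale is run on a toy multiscale potential, and an effective potential is read off from a histogram of the iterates via U_eff ≈ -log ρ, up to an additive constant and the unknown effective temperature. The toy potential, the step size, and the histogram-based recovery are all illustrative assumptions.

```python
import numpy as np

# Multiscale toy potential: macroscale quadratic plus fast oscillations
# at scale eps (illustrative choice, not an example from the paper).
eps = 0.01
V     = lambda q: 0.5 * q**2 + eps * np.cos(q / eps)
gradV = lambda q: q - np.sin(q / eps)

# Plain gradient descent with a step size too large to resolve the
# eps-scale wiggles; its iterates then wander chaotically and, per the
# implicit-bias result cited as [44], approximately sample a Gibbs-like
# distribution exp(-U_eff / T_eff) for an effective potential U_eff.
h, n_steps, n_burn = 0.1, 200_000, 1_000
q = 1.0
samples = []
for k in range(n_steps):
    q = q - h * gradV(q)
    if k >= n_burn:
        samples.append(q)
samples = np.asarray(samples)

# Recover the effective potential from the iterate histogram:
# U_eff(q) ∝ -log ρ(q), defined only up to shift and scaling by T_eff.
hist, edges = np.histogram(samples, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
U_eff = -np.log(hist[mask])
print(np.c_[centers[mask], U_eff][:5])   # (location, effective potential) pairs
```

In this sketch the step size h plays the role of the scale selector: it is chosen large relative to eps but small relative to the macroscale curvature, which is the regime in which the iterates average out the fine-scale oscillations.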
Highlights:
• We are able to learn each component of V = V_0 + ∑_{j=1}^{K} V_{j,ε_j} at different scales without explicit access to each scale or knowing the ε_j values a priori.
• Once an effective potential U_k(q) approximating V up to scale ε_k is learned, we construct surrogate models for both deterministic and stochastic dynamical systems which preserve the original properties.
• We have developed a method to estimate and calibrate anisotropic stochastic forcings for kinetic Langevin systems by using equilibrium properties and adjusting the mass matrix.
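The third highlight and the last part of the summary concern calibrating anisotropic noise in a kinetic Langevin system. The sketch below illustrates only the standard fluctuation–dissipation calibration that such a design targets, not the paper's estimation algorithm: assuming the noise covariance ΣΣᵀ and temperature T are already known, choosing the friction matrix Γ = ΣΣᵀ/(2T) makes the Gibbs measure invariant, which the simulation checks through the equilibrium momentum covariance. The potential, covariance entries, and temperature are illustrative values.

```python
import numpy as np

# Calibration step only (the paper's contribution is estimating the noise
# covariance without measuring the noise directly; that part is not shown).
# Kinetic Langevin system:
#     dq = p dt,   dp = (-grad V(q) - Gamma p) dt + Sigma dW.
# With Gamma = Sigma Sigma^T / (2*T), the Gibbs measure exp(-H(q,p)/T)
# is invariant, so the stationary momentum covariance should be T * I.
rng = np.random.default_rng(0)
T = 0.5                                    # temperature (assumed known)
Sigma = np.array([[1.2, 0.0],              # anisotropic, non-diagonal noise
                  [0.4, 0.8]])             # (illustrative values)
C = Sigma @ Sigma.T                        # noise covariance Sigma Sigma^T
Gamma = C / (2.0 * T)                      # friction calibrated to the noise

grad_V = lambda q: q                       # quadratic toy potential V = |q|^2 / 2

# Euler-Maruyama simulation to check the equilibrium momentum covariance.
dt, n_steps = 1e-3, 1_000_000
q = np.zeros(2)
p = np.zeros(2)
acc = np.zeros((2, 2))
count = 0
for k in range(n_steps):
    dW = rng.normal(size=2) * np.sqrt(dt)
    q = q + p * dt
    p = p + (-grad_V(q) - Gamma @ p) * dt + Sigma @ dW
    if k > n_steps // 10:                  # discard burn-in
        acc += np.outer(p, p)
        count += 1

print("empirical cov(p) (approx.):\n", acc / count)
print("Gibbs prediction T * I:\n", T * np.eye(2))
```

The printed empirical covariance should approximately match T·I; with a friction matrix not satisfying the fluctuation–dissipation relation, the stationary distribution would generally not have a closed Gibbs form, which is the situation the calibration is meant to repair.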
ISSN: 0021-9991
DOI: 10.1016/j.jcp.2024.113206