
DynNet: Physics-based neural architecture design for nonlinear structural response modeling and prediction

Bibliographic Details
Published in: Engineering Structures, 2021-02, Vol. 229, p. 111582, Article 111582
Main Authors: Sadeghi Eshkevari, Soheil, Takáč, Martin, Pakzad, Shamim N., Jahani, Majid
Format: Article
Language: English
Description
Summary:
•A data-driven method for complete dynamic response estimation of nonlinear MDOF systems is presented.
•A physics-based neural network termed DynNet is developed that is inspired by Newmark's method.
•Optimization techniques are proposed to accelerate the learnability of the network.
•DynNet has a notably smaller variable space and performs desirably in extreme nonlinear cases.
•DynNet has wide applications in SHM and regional reliability assessments.

Data-driven models for predicting the dynamic responses of linear and nonlinear systems are of great importance due to their wide range of applications, from probabilistic analysis to inverse problems such as system identification and damage diagnosis. In this study, a physics-based recurrent neural network model is designed that is able to estimate the dynamics of linear and nonlinear multiple-degree-of-freedom systems given the ground motions. The model estimates a complete set of responses, including displacements, velocities, accelerations, and internal forces. Compared to the most advanced counterparts, this model requires a smaller number of trainable variables while achieving higher prediction accuracy over long trajectories. In addition, the architecture of the recurrent block is inspired by differential equation solver algorithms, and this approach is expected to yield more generalized solutions. For the training phase, we propose multiple novel techniques that substantially accelerate the learning process with smaller datasets, such as hard sampling, a trajectory loss function, and a trust-region optimization approach. Numerical case studies are conducted to examine the ability of the network to learn different nonlinear behaviors. It is shown that the network captures different nonlinear behaviors of dynamic systems with high accuracy, with no need for prior information or very large datasets.
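The abstract states that DynNet's recurrent block is inspired by differential equation solvers, specifically Newmark's method. As background, the sketch below shows one step of the classical Newmark-beta integrator for a linear MDOF system under ground excitation; it illustrates the kind of time-stepping update that motivates such a recurrent cell, not the DynNet architecture itself, and the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

# Minimal sketch of one step of the classical Newmark-beta integrator for a
# linear MDOF system. This is the well-known structural dynamics algorithm
# referenced by the abstract as the inspiration for DynNet's recurrent block;
# it is not the network itself, and the names here are illustrative only.

def newmark_step(M, C, K, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
    """Advance displacement u, velocity v, acceleration a by one time step dt.

    M, C, K : (n, n) mass, damping, and stiffness matrices
    f_next  : (n,) external force at the next step, e.g. -M @ iota * ag_next
              for ground acceleration ag_next with influence vector iota
    beta, gamma : Newmark parameters (average-acceleration rule by default)
    """
    # Effective stiffness and effective force from the Newmark relations
    K_eff = K + (gamma / (beta * dt)) * C + (1.0 / (beta * dt**2)) * M
    rhs = (f_next
           + M @ (u / (beta * dt**2) + v / (beta * dt)
                  + (1.0 / (2.0 * beta) - 1.0) * a)
           + C @ ((gamma / (beta * dt)) * u
                  + (gamma / beta - 1.0) * v
                  + dt * (gamma / (2.0 * beta) - 1.0) * a))

    # Solve for the new displacement, then recover acceleration and velocity
    u_next = np.linalg.solve(K_eff, rhs)
    a_next = ((u_next - u) / (beta * dt**2)
              - v / (beta * dt)
              - (1.0 / (2.0 * beta) - 1.0) * a)
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next
```

A physics-based recurrent cell in this spirit would carry the state (u, v, a) between steps and replace the fixed restoring force K·u with a learned nonlinear term, which is one way to read the abstract's claim of a small trainable-variable space; the full paper should be consulted for the exact DynNet formulation.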
ISSN: 0141-0296, 1873-7323
DOI: 10.1016/j.engstruct.2020.111582