
Computational homogenization at extreme scales

Bibliographic Details
Published in: Extreme Mechanics Letters, 2016-03, Vol. 6 (C)
Main Authors: Mosby, Matthew, Matouš, Karel
Format: Article
Language:English
Description
Summary: Multi-scale simulations at extreme scales, in terms of both physical length scales and computational resources, are presented. In this letter, we introduce a hierarchically parallel computational homogenization solver that employs hundreds of thousands of computing cores and resolves in material length scales (from to ). Simulations of this kind are important in understanding the multi-scale essence of many natural and synthetically made materials. Thus, we present a simulation consisting of 53.8 billion finite elements with 28.1 billion nonlinear equations that is solved on 393,216 computing cores (786,432 threads). The excellent parallel performance of the computational homogenization solver is demonstrated by a strong scaling test from 4,096 to 262,144 cores. A fully coupled multi-scale damage simulation shows a complex crack profile at the micro-scale and the macroscopic crack tunneling phenomenon. Such large and predictive simulations are an important step towards Virtual Materials Testing and can aid in the development of new material formulations with extreme properties. Furthermore, the high computational efficiency of our computational homogenization solver holds great promise for utilizing the next generation of exascale parallel computing platforms, which are expected to accelerate computations through an orders-of-magnitude increase in parallelism rather than in the speed of each processor.
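
The parallel-performance claim in the abstract is based on a strong-scaling test, in which the same fixed problem is solved on increasing numbers of cores. The following is a minimal sketch of how strong-scaling parallel efficiency is typically computed from such a test; the function and the timing values are illustrative assumptions for exposition only, not code or data from the paper.

    def strong_scaling_efficiency(t_base, p_base, t_scaled, p_scaled):
        """Parallel efficiency for a strong-scaling test.

        The problem size is fixed, so the ideal speedup equals the ratio
        of core counts; efficiency is the measured speedup divided by
        that ideal value.
        """
        speedup = t_base / t_scaled          # measured speedup from wall times
        ideal_speedup = p_scaled / p_base    # perfect scaling would match the core ratio
        return speedup / ideal_speedup

    # Hypothetical wall times (not results from the paper), shown only to
    # illustrate the formula at the core counts quoted in the abstract:
    # a baseline run on 4,096 cores and a scaled run on 262,144 cores.
    eff = strong_scaling_efficiency(t_base=6400.0, p_base=4096,
                                    t_scaled=105.0, p_scaled=262144)
    print(f"parallel efficiency: {eff:.2f}")  # ~0.95 for these made-up timings

An efficiency close to 1.0 over such a range of core counts is what "excellent parallel performance" in a strong-scaling test refers to.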
ISSN: 2352-4316
DOI: 10.1016/j.eml.2015.12.009