Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines
Published in: Computer Methods in Applied Mechanics and Engineering, 2015-02, Vol. 284, pp. 971-987
Format: Article
Language: English
Summary: This paper derives theoretical estimates of the computational cost of an isogeometric multi-frontal direct solver executed on parallel distributed memory machines. We show theoretically that for C^{p-1} global continuity of the isogeometric solution, both the computational cost and the communication cost of the direct solver are of order O(log(N) p^2) in the one-dimensional (1D) case, O(N p^2) in the two-dimensional (2D) case, and O(N^{4/3} p^2) in the three-dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX, and SuperLU, available through the PETIGA toolkit built on top of PETSc. The numerical results confirm these theoretical estimates in terms of both p and N. For a fixed problem size, the strong-scaling efficiency decreases rapidly as the number of processors increases, reaching about 20% on 256 processors for a 3D example with 128^3 unknowns and linear B-splines with C^0 global continuity, and 15% for a 3D example with 64^3 unknowns and quartic B-splines with C^3 global continuity. At the same time, the problem size cannot be increased arbitrarily, since the memory required by higher-continuity spaces is large and quickly consumes all the available memory resources even in the parallel distributed memory version. The numerical results also suggest that distributed parallel machines are highly beneficial when solving with higher-continuity spaces, although the number of processors that can be employed efficiently is somewhat limited.
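A compact restatement of the quoted cost estimates in LaTeX, as a minimal sketch based only on the summary above (the bounds are taken directly from the abstract, not re-derived here; N and p are as defined there):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Computational and communication cost of the parallel multi-frontal direct
% solver for C^{p-1}-continuous B-spline discretizations, as quoted in the
% abstract (N = number of degrees of freedom, p = B-spline polynomial order).
\begin{align*}
  \text{1D:} \quad & \mathcal{O}\!\left(\log(N)\,p^{2}\right)\\
  \text{2D:} \quad & \mathcal{O}\!\left(N\,p^{2}\right)\\
  \text{3D:} \quad & \mathcal{O}\!\left(N^{4/3}\,p^{2}\right)
\end{align*}
\end{document}
```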
Highlights:
- We estimate the computational cost of an isogeometric solver on distributed memory parallel machines.
- We show p^2 scalability as we increase the global continuity, in 2D and 3D.
- We show O(N) and O(N^{4/3}) costs for 2D and 3D parallel direct solvers.
- We verify the costs experimentally on the STAMPEDE Linux cluster from TACC.
- We test MUMPS, PaStiX, and SuperLU, through the PETIGA toolkit built on top of PETSc.
ISSN: 0045-7825, 1879-2138
DOI: 10.1016/j.cma.2014.11.020