
A novel HPL-AI approach for FP16-only accelerator and its instantiation on Kunpeng+Ascend AI-specific platform

Bibliographic Details
Published in: Journal of Parallel and Distributed Computing, 2024-08, Vol. 190, p. 104884, Article 104884
Main Authors: Cao, Zijian; Sun, Qiao; Yang, Wenhao; Song, Changcheng; Wang, Zhe; Li, Huiyuan
Format: Article
Language:English
Description
Summary: HPL-AI, also known as HPL-MxP, is a new benchmark program used to evaluate the upper-bound performance of AI-related tasks on a specific computing cluster. It solves a large system of linear equations in FP64, preconditioned by a complete LU factorization carried out in lower precision. In this paper, we propose a new HPL-AI approach that factorizes the coefficient matrix in mixed precision: FP32 diagonals and FP16 off-diagonals. Without compromising the quality of the resulting LU preconditioner, the proposed approach uses only the dense matrix-multiplication primitive in FP16 on the accelerator, maximizing FP16 throughput. Numerical analysis and experiments validate the approach and confirm that numerical underflow and overflow are avoided during factorization. We implement the proposed approach on Kunpeng+Ascend clusters, a novel AI-specific platform with exceedingly high FP16 peak performance. By applying various optimization techniques, including 2D lookahead, an HCCL-based communication pipeline, and SYCL-based task overlapping, we achieve 975 TFlops on a single node and nearly 100 PFlops on a cluster of 128 nodes, with a weak scalability of 79.8%.

Highlights:
• First-time adaptation and optimization of HPL-AI on the Kunpeng+Ascend platform.
• A novel approach for mixed-precision LU: FP32 diagonals and FP16 off-diagonals.
• Over/underflow avoidance: error analysis and magnitude estimation of HPL-AI LU.
• On a single node, 8 Ascend 910A Pro AI accelerators achieve 42.3% HPL-AI efficiency.
• On a 128-node cluster: 98.9 PFlops HPL-AI performance and 79.8% weak scalability.
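
The mixed-precision factorization described in the summary can be made concrete with a short sketch. Below is a minimal NumPy example of a right-looking blocked LU in which the panel and diagonal-block work is kept in FP32 while the dominant trailing-matrix update (the GEMM) is cast to FP16, in the spirit of "FP32 diagonals, FP16 off-diagonals". The function name, block size, and the omission of pivoting are illustrative assumptions, not the authors' implementation; HPL-AI generates a diagonally dominant matrix, which is why pivoting can be skipped.

```python
import numpy as np

def blocked_lu_mixed(A, nb=64):
    """Illustrative sketch: blocked LU without pivoting, with FP32
    panel/diagonal-block work and an FP16 trailing-matrix GEMM.
    Not the paper's implementation."""
    n = A.shape[0]
    LU = np.array(A, dtype=np.float32)
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # Unblocked panel factorization of columns k:e, kept in FP32.
        # No pivoting: HPL-AI's matrix is diagonally dominant.
        for j in range(k, e):
            LU[j+1:, j] /= LU[j, j]
            LU[j+1:, j+1:e] -= np.outer(LU[j+1:, j], LU[j, j+1:e])
        if e < n:
            # Forward substitution with the unit lower-triangular diagonal
            # block to form the U12 block row, still in FP32.
            for j in range(k, e):
                LU[j+1:e, e:] -= np.outer(LU[j+1:e, j], LU[j, e:])
            # Trailing-matrix update: the dominant GEMM, cast to FP16 so it
            # could map to an accelerator's FP16 matrix-multiply primitive.
            L21 = LU[e:, k:e].astype(np.float16)
            U12 = LU[k:e, e:].astype(np.float16)
            LU[e:, e:] -= (L21 @ U12).astype(np.float32)
    return LU

if __name__ == "__main__":
    # Quick check on a small, diagonally dominant matrix (hypothetical size).
    n = 256
    rng = np.random.default_rng(0)
    A = rng.random((n, n), dtype=np.float32) + n * np.eye(n, dtype=np.float32)
    LU = blocked_lu_mixed(A)
    L = np.tril(LU, -1) + np.eye(n, dtype=np.float32)
    U = np.triu(LU)
    print("relative residual:", np.linalg.norm(A - L @ U) / np.linalg.norm(A))
```

In the benchmark itself this low-precision factorization serves only as a preconditioner: the FP64 solution is then recovered by iterative refinement, a step the sketch above omits.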
ISSN: 0743-7315, 1096-0848
DOI: 10.1016/j.jpdc.2024.104884