
Lins: Reducing Communication Overhead of ZeRO for Efficient LLM Training

Bibliographic Details
Main Authors: Chen, Qiaoling, Hu, Qinghao, Wang, Guoteng, Xiong, Yingtong, Huang, Ting, Chen, Xun, Gao, Yang, Yan, Hang, Wen, Yonggang, Zhang, Tianwei, Sun, Peng
Format: Conference Proceeding
Language: English
Description
Summary: Training large language models (LLMs) encounters challenges in GPU memory consumption due to the high memory requirements of model states. The widely used Zero Redundancy Optimizer (ZeRO) addresses this issue through strategic sharding but introduces communication challenges at scale. To tackle this problem, we propose Lins, a system designed to optimize ZeRO for scalable LLM training. Lins incorporates three flexible sharding strategies: Full-Replica, Full-Sharding, and Partial-Sharding, and allows each component within the model states (Parameters, Gradients, Optimizer States) to independently choose a sharding strategy as well as the device mesh. We conduct a thorough analysis of communication costs, formulating an optimization problem to discover the optimal sharding strategy. Evaluations demonstrate up to 52% Model FLOPs Utilization (MFU) when training the LLaMA-based model on 1024 GPUs, resulting in a 1.56 times improvement in training throughput compared to newly proposed systems like MiCS and ZeRO++.
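The abstract describes a search over per-component sharding choices (Full-Replica, Partial-Sharding, Full-Sharding) for Parameters, Gradients, and Optimizer States, each with its own device mesh. The sketch below is purely illustrative of that idea and is not the paper's actual API or cost model; all names, shard degrees, and the memory/communication formulas are simplifying assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

class Strategy(Enum):
    FULL_REPLICA = "full-replica"   # state kept on every GPU, no gather needed
    PARTIAL_SHARDING = "partial"    # state sharded within a subgroup of GPUs
    FULL_SHARDING = "full"          # state sharded across all GPUs

@dataclass(frozen=True)
class ShardingPlan:
    params: Strategy
    grads: Strategy
    optim: Strategy

# Assumed shard degrees on a hypothetical 1024-GPU cluster with 8 GPUs per node.
SHARD_DEGREE = {
    Strategy.FULL_REPLICA: 1,
    Strategy.PARTIAL_SHARDING: 8,     # shard within one node
    Strategy.FULL_SHARDING: 1024,     # shard across the whole cluster
}

def memory_per_gpu(plan: ShardingPlan, model_gb: float) -> float:
    """Rough per-GPU memory: fp16 params + fp16 grads + fp32 Adam states (~6x fp16 size)."""
    return (model_gb / SHARD_DEGREE[plan.params]
            + model_gb / SHARD_DEGREE[plan.grads]
            + 6 * model_gb / SHARD_DEGREE[plan.optim])

def comm_volume(plan: ShardingPlan, model_gb: float) -> float:
    """Toy per-step communication volume (GB); sharding params adds gather traffic."""
    vol = 0.0
    if plan.params is not Strategy.FULL_REPLICA:
        vol += 2 * model_gb   # all-gather parameters for forward and backward
    vol += model_gb           # reduce-scatter / all-reduce gradients
    if plan.optim is not Strategy.FULL_REPLICA and plan.params is Strategy.FULL_REPLICA:
        vol += model_gb       # redistribute updated parameters after the optimizer step
    return vol

def best_plan(model_gb: float, mem_budget_gb: float) -> ShardingPlan:
    """Cheapest plan (by the toy comm volume) that fits the per-GPU memory budget."""
    feasible = [ShardingPlan(p, g, o)
                for p, g, o in product(Strategy, repeat=3)
                if memory_per_gpu(ShardingPlan(p, g, o), model_gb) <= mem_budget_gb]
    return min(feasible, key=lambda plan: comm_volume(plan, model_gb))

if __name__ == "__main__":
    plan = best_plan(model_gb=14.0, mem_budget_gb=40.0)  # e.g. a 7B-parameter fp16 model
    print(plan, comm_volume(plan, 14.0), memory_per_gpu(plan, 14.0))
```

Under these assumptions, replicating parameters while fully sharding only the optimizer states already fits a 40 GB budget and minimizes the toy communication volume, which mirrors the paper's point that the best sharding choice differs per component rather than being uniform.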
ISSN: 2766-8568
DOI: 10.1109/IWQoS61813.2024.10682856