
Bagua: scaling up distributed learning with system relaxations

Bibliographic Details
Published in:Proceedings of the VLDB Endowment 2021-12, Vol.15 (4), p.804-813
Main Authors: Gan, Shaoduo, Jiang, Jiawei, Yuan, Binhang, Zhang, Ce, Lian, Xiangru, Wang, Rui, Chang, Jianbin, Liu, Chengjun, Shi, Hongmei, Zhang, Shengzhuo, Li, Xianghong, Sun, Tengxu, Yang, Sen, Liu, Ji
Format: Article
Language:English
Description
Summary:Recent years have witnessed a growing list of systems for distributed data-parallel training. Existing systems largely fit into two paradigms: parameter server and MPI-style collective operations. On the algorithmic side, researchers have proposed a wide range of techniques to lower communication via "system relaxations": quantization, decentralization, and communication delay. However, most, if not all, existing systems rely only on standard synchronous and asynchronous stochastic gradient (SG) based optimization, and therefore cannot take advantage of all the optimizations that the machine learning community has developed recently. Given this emerging gap between the current landscapes of systems and theory, we build Bagua, an MPI-style communication library that provides a flexible and modular collection of primitives supporting state-of-the-art system relaxation techniques for distributed training. Powered by this design, Bagua can implement and extend a variety of state-of-the-art distributed learning algorithms. In a production cluster with up to 16 machines (128 GPUs), Bagua outperforms PyTorch-DDP, Horovod, and BytePS in end-to-end training time by a significant margin (up to 2x) across a diverse range of tasks. Moreover, we conduct a rigorous tradeoff exploration showing that different algorithms and system relaxations achieve the best performance under different network conditions.
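
To make the "quantization" relaxation mentioned in the abstract concrete, the following is a minimal illustrative sketch of an 8-bit quantized all-reduce built on plain torch.distributed. It is not Bagua's actual API or implementation; the function name quantized_allreduce and the per-tensor int8 scheme are assumptions made purely for illustration of how compression reduces communication volume.

```python
# Minimal sketch (assumed example, not Bagua's API): quantize gradients to int8
# before an all-reduce, trading a small amount of precision for ~4x less traffic.
import torch
import torch.distributed as dist

def quantized_allreduce(grad: torch.Tensor) -> torch.Tensor:
    """Average a gradient tensor across workers using int8 quantization."""
    # Agree on a shared per-tensor scale across all workers first.
    scale = grad.abs().max().clamp(min=1e-8) / 127.0
    dist.all_reduce(scale, op=dist.ReduceOp.MAX)

    # Quantize with the shared scale; accumulate in int32 to avoid overflow
    # when the per-worker int8 values are summed.
    q = torch.clamp((grad / scale).round(), -127, 127).to(torch.int32)
    dist.all_reduce(q, op=dist.ReduceOp.SUM)

    # Dequantize and average to obtain the synchronized gradient.
    return q.to(grad.dtype) * scale / dist.get_world_size()
```

In this sketch the extra all-reduce on the scalar scale costs a few bytes, while the gradient payload itself shrinks from fp32 to int8, which is the kind of communication saving the paper's relaxation techniques exploit.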
ISSN:2150-8097
DOI:10.14778/3503585.3503590