
FedDGIC: Reliable and Efficient Asynchronous Federated Learning with Gradient Compensation

Bibliographic Details
Main Authors: Xie, Zaipeng, Jiang, Junchen, Chen, Ruifeng, Qu, Zhihao, Liu, Hanxiang
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: Asynchronous federated learning is a distributed machine learning paradigm that may alleviate the impact of straggler nodes and improve the efficiency of federated training. However, some nodes can become sluggish, and node dropout may frequently happen for various reasons, such as network connection constraints, energy deficits, and system faults. Consequently, the global model may deviate from the desired convergence direction and lead to suboptimal results. This work proposes an asynchronous federated learning framework, FedDGIC, to mitigate the impact of the node dropout problem. The proposed framework can improve training efficiency by utilizing a dynamic grouping algorithm with gradient compensation. Experiments are performed in a real federated learning environment using two datasets, i.e., MNIST and CIFAR-10. Compared with three state-of-the-art methods, the proposed FedDGIC can significantly improve training efficiency and provide reliable asynchronous federated learning.
ISSN:2690-5965
DOI:10.1109/ICPADS56603.2022.00021
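
Note: the record contains only the abstract, not the FedDGIC algorithm itself. The Python sketch below therefore illustrates only the general idea the abstract points at: an asynchronous server that applies stale client gradients as they arrive, down-weights them by staleness, and adds a compensation term before updating the global model. The weighting rule, the DC-ASGD-style compensation term, and all names (staleness_weight, compensate_gradient, async_server_update) are illustrative assumptions, not the authors' method.

```python
import numpy as np

def staleness_weight(staleness, alpha=0.6):
    # Down-weight stale updates; a common heuristic in asynchronous FL (assumed, not FedDGIC's rule).
    return alpha / (1.0 + staleness)

def compensate_gradient(stale_grad, pulled_params, current_params, lam=0.1):
    # Delay-compensation heuristic in the spirit of DC-ASGD: approximate the gradient
    # the client would have computed on the current global model by adding a
    # curvature-like correction for how far the model has drifted since the pull.
    drift = current_params - pulled_params
    return stale_grad + lam * stale_grad * stale_grad * drift

def async_server_update(global_params, grad, pulled_params, staleness, lr=0.05):
    # Apply one client update as soon as it arrives, without waiting for stragglers.
    g = compensate_gradient(grad, pulled_params, global_params)
    return global_params - lr * staleness_weight(staleness) * g

# Toy run: three clients report gradients with different staleness values.
rng = np.random.default_rng(0)
params = np.zeros(4)
snapshots = {cid: params.copy() for cid in range(3)}   # global model each client pulled
for cid, staleness in [(0, 0), (1, 2), (2, 5)]:
    fake_grad = rng.normal(size=4)                      # stand-in for a real local gradient
    params = async_server_update(params, fake_grad, snapshots[cid], staleness)
    print(f"client {cid} (staleness {staleness}): params = {np.round(params, 3)}")
```

In this sketch, a very stale update (staleness 5) moves the global model far less than a fresh one (staleness 0), which captures the abstract's motivation that sluggish or dropping nodes should not pull the global model off its convergence direction.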