Robust Federated Averaging via Outlier Pruning
| Published in: | IEEE Signal Processing Letters, 2022, Vol. 29, pp. 409–413 |
|---|---|
| Format: | Article |
| Language: | English |
| Summary: | Federated Averaging (FedAvg) is the baseline Federated Learning (FL) algorithm: it applies stochastic gradient descent for local model training and arithmetic averaging of the local models' parameters for global model aggregation. Subsequent FL works commonly adopt FedAvg's arithmetic averaging scheme for aggregation. However, arithmetic averaging is sensitive to outlier model updates, especially when the clients' data are non-Independent and Identically Distributed (non-IID). The classical aggregation approach therefore suffers from the dominance of outlier updates and, consequently, incurs high communication costs before producing a decent global model. In this letter, we propose a robust aggregation strategy to alleviate these issues. In particular, we first prune the node-wise outlier updates (weights) from the locally trained models and then perform the aggregation over the selected effective weight set at each node. We provide a theoretical analysis of our method and conduct extensive experiments on the MNIST, CIFAR-10, and Shakespeare datasets under IID and non-IID settings, which demonstrate that our aggregation approach outperforms state-of-the-art methods in terms of communication speedup, test-set performance, and training convergence. |
| ISSN: | 1070-9908; 1558-2361 |
| DOI: | 10.1109/LSP.2021.3134893 |
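
The summary describes the aggregation only at a high level, so the following is a minimal NumPy sketch of the idea rather than the authors' implementation: for each parameter coordinate ("node-wise"), client values that fall far from the coordinate median are treated as outliers and pruned, and only the remaining effective weights are averaged. The function name `robust_aggregate`, the median-absolute-deviation criterion, and the threshold `k` are all illustrative assumptions; the letter's actual pruning rule may differ.

```python
import numpy as np

def robust_aggregate(client_weights, k=2.0):
    """Average per-coordinate client weights after pruning outliers.

    client_weights: (num_clients, num_params) array, one row per locally
        trained model (flattened parameters).
    k: pruning threshold in median-absolute-deviation (MAD) units; a
        hypothetical choice, not a value taken from the letter.
    """
    W = np.asarray(client_weights, dtype=np.float64)
    med = np.median(W, axis=0)                 # per-coordinate median
    mad = np.median(np.abs(W - med), axis=0)   # per-coordinate MAD
    mad = np.where(mad == 0.0, 1e-12, mad)     # guard against zero spread
    keep = np.abs(W - med) <= k * mad          # mask of "effective" weights
    keep[:, ~keep.any(axis=0)] = True          # never prune every client
    # Average only the retained (non-outlier) values at each coordinate.
    return (W * keep).sum(axis=0) / keep.sum(axis=0)

# Toy example: five clients, one of which sends an outlier update.
clients = np.array([[1.0, 2.0],
                    [1.1, 2.1],
                    [0.9, 1.9],
                    [1.0, 2.0],
                    [9.0, -7.0]])
print(robust_aggregate(clients))  # ~[1.0, 2.0]
print(clients.mean(axis=0))       # plain FedAvg mean, skewed to [2.6, 0.2]
```

For comparison, plain FedAvg aggregation is simply `W.mean(axis=0)`; as the toy example shows, a single outlier client can pull that mean far away from the honest clients' consensus, which is the failure mode the pruning step targets.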