Clustered Scheduling and Communication Pipelining for Efficient Resource Management of Wireless Federated Learning

Bibliographic Details
Published in: IEEE Internet of Things Journal, 2023-08, Vol. 10 (15), p. 1-1
Main Authors: Kececi, Cihat, Shaqfeh, Mohammad, Al-Qahtani, Fawaz, Ismail, Muhammad, Serpedin, Erchin
Format: Article
Language:English
Description
Summary: This paper proposes using communication pipelining to enhance the convergence speed of federated learning in mobile edge computing applications. Because wireless sub-channels are limited, only a subset of the clients is scheduled in each iteration of a federated learning algorithm, and the scheduled clients must wait for the slowest among them to finish its local computation. We propose to first cluster the clients based on the time they need per iteration to compute the local gradients of the federated learning model, and then to schedule a mixture of clients from all clusters so that they send their local updates in a pipelined manner. In this way, rather than idling while the slowest clients finish their computations, the sub-channels are reused and more clients can participate in each iteration. Although the duration of a single iteration does not change, the proposed method can significantly reduce the number of iterations required to reach a target accuracy. We provide a generic formulation for optimal client clustering under different settings and analytically derive an efficient algorithm for obtaining the optimal solution. Numerical results demonstrate the gains of the proposed method for different datasets and deep learning architectures.
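
To illustrate the clustered-scheduling idea summarized above, the sketch below groups clients by per-iteration compute time and then selects clients from every cluster for one pipelined round. This is a minimal sketch under assumed names (compute_times, num_subchannels, cluster_by_compute_time, schedule_round are all illustrative, not the authors' implementation or notation), and it uses an equal-size split as a placeholder where the paper analytically derives the optimal cluster boundaries.

import random

def cluster_by_compute_time(compute_times, num_clusters):
    # Sort client indices from fastest to slowest per-iteration compute time,
    # then split them into contiguous clusters. The paper derives optimal
    # cluster boundaries; this equal-size split is only a placeholder.
    order = sorted(range(len(compute_times)), key=lambda c: compute_times[c])
    size = -(-len(order) // num_clusters)  # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]

def schedule_round(clusters, num_subchannels):
    # Select up to num_subchannels clients from *each* cluster. Clients in
    # faster clusters finish computing and upload first, so the same
    # sub-channels are reused by each cluster in turn (pipelining), letting
    # len(clusters) * num_subchannels clients join a single round.
    selected = []
    for cluster in clusters:
        k = min(num_subchannels, len(cluster))
        selected.extend(random.sample(cluster, k))
    return selected

# Example: 12 clients with heterogeneous per-iteration compute times (seconds).
times = [0.4, 1.1, 0.6, 2.3, 0.5, 1.9, 0.7, 2.8, 1.2, 0.9, 2.1, 1.5]
clusters = cluster_by_compute_time(times, num_clusters=3)
print("clusters (fast to slow):", clusters)
print("round participants:", schedule_round(clusters, num_subchannels=2))

With K clusters and C sub-channels, the C sub-channels are reused by each cluster in turn, so up to K x C clients participate per round while the round duration is still governed by the slowest scheduled cluster, which is the source of the convergence-speed gain the abstract describes.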
ISSN: 2327-4662
DOI: 10.1109/JIOT.2023.3262620