
Communication-efficient hierarchical federated learning for IoT heterogeneous systems with imbalanced data

Bibliographic Details
Published in: Future Generation Computer Systems, 2022-03, Vol. 128, pp. 406–419
Main Authors: Abdellatif, Alaa Awad, Mhaisen, Naram, Mohamed, Amr, Erbad, Aiman, Guizani, Mohsen, Dawy, Zaher, Nasreddine, Wassim
Format: Article
Language: English
Summary: Federated Learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model without the need to share their local data. It is a promising solution for telemonitoring systems that demand intensive data collection, for detection, classification, and prediction of future events, from different locations while maintaining a strict privacy constraint. Due to privacy concerns and critical communication bottlenecks, it can become impractical to send the FL updated models to a centralized server. Thus, this paper studies the potential of hierarchical FL in Internet of Things (IoT) heterogeneous systems. In particular, we propose an optimized solution for user assignment and resource allocation over a hierarchical FL architecture for IoT heterogeneous systems. This work focuses on a generic class of machine learning models that are trained using gradient-descent-based schemes, while considering the practical constraints of non-uniformly distributed data across different users. We evaluate the proposed system using two real-world datasets and show that it outperforms state-of-the-art FL solutions. Specifically, our numerical results highlight the effectiveness of our approach and its ability to provide a 4–6% increase in classification accuracy with respect to hierarchical FL schemes that use distance-based user assignment. Furthermore, the proposed approach can significantly accelerate FL training and reduce communication overhead, providing a 75–85% reduction in the communication rounds between edge nodes and the centralized server for the same model accuracy.

Highlights:
• Design a distributed learning system with hierarchical Federated Learning (HFL).
• Propose an optimized solution for user assignment and resource allocation over HFL.
• Study the effect of imbalanced data on the obtained accuracy and convergence time.
• Evaluate the performance of the proposed approach using real-world datasets.
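
The abstract describes gradient-descent-based training combined with a two-level (edge/cloud) model-aggregation hierarchy under imbalanced local data. The Python sketch below is only a minimal illustration of that general idea: the function names, the sample-count weighting, and the fixed client-to-edge assignment are assumptions made here for illustration, and they do not reproduce the paper's optimized user-assignment and resource-allocation scheme.

# Illustrative sketch (not the paper's algorithm): two-level weighted model
# averaging as used in hierarchical federated learning. Client updates are
# first averaged at their assigned edge node, then the edge models are
# averaged at the central server. Weighting by local sample counts is one
# common way to account for non-uniformly distributed (imbalanced) data.
import numpy as np

def weighted_average(models, sample_counts):
    """Average a list of parameter vectors, weighted by local dataset sizes."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def hierarchical_round(client_models, client_samples, assignment, num_edges):
    """One global round: edge-level aggregation followed by cloud aggregation.

    assignment[i] is the edge node that client i reports to; in the paper this
    assignment is optimized, whereas here it is simply given as an input.
    """
    edge_models, edge_samples = [], []
    for e in range(num_edges):
        idx = [i for i, a in enumerate(assignment) if a == e]
        if not idx:
            continue
        edge_models.append(weighted_average([client_models[i] for i in idx],
                                            [client_samples[i] for i in idx]))
        edge_samples.append(sum(client_samples[i] for i in idx))
    # Cloud aggregation over edge models, weighted by the data each represents.
    return weighted_average(edge_models, edge_samples)

# Example: 4 clients with 10-dimensional parameter vectors, 2 edge nodes.
rng = np.random.default_rng(0)
clients = [rng.normal(size=10) for _ in range(4)]
samples = [120, 40, 300, 90]  # imbalanced local dataset sizes
global_model = hierarchical_round(clients, samples, assignment=[0, 0, 1, 1], num_edges=2)
print(global_model.shape)
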
ISSN: 0167-739X
      1872-7115
DOI: 10.1016/j.future.2021.10.016