RFCSC: Communication efficient reinforcement federated learning with dynamic client selection and adaptive gradient compression

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2025-01, Vol. 612, p. 128672, Article 128672
Main Authors: Pan, Zhenhui, Li, Yawen, Guan, Zeli, Liang, Meiyu, Li, Ang, Wang, Jia, Kou, Feifei
Format: Article
Language:English
Subjects:
Summary: In the field of public safety, high-quality data is often held by governments, companies, and organizations, making it difficult to train effective models on centralized datasets. This paper leverages federated learning as a mechanism to address data privacy concerns. Within the federated learning framework, the data across different clients is typically non-IID (not independently and identically distributed). Furthermore, the complexity of sensitive image recognition tasks in public safety, along with the large number of model parameters, can lead to communication congestion in federated learning. To address these challenges, this paper proposes a Reinforcement Federated Client Selection and Gradient Compression method (RFCSC). By integrating client data prototypes with accuracy metrics, the method dynamically assesses the contribution of each client in federated learning. An intelligent dynamic incentive mechanism based on reinforcement learning then selects high-quality client nodes to participate in each round, achieving dynamic adaptive aggregation, reducing the influence of non-IID data, lowering communication cost, enhancing model accuracy, and striking a balance between quality and efficiency. To address the high communication cost of parameter transmission, the paper further proposes an adaptive model compression strategy for reinforcement federated learning that enables each local client to find an appropriate compression rate. This approach not only reduces gradient communication overhead but also minimizes the impact of compression on model accuracy. The effectiveness of the proposed approach is corroborated through comprehensive experiments on one private dataset and two public datasets.
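The gradient compression the abstract describes can be illustrated with a minimal top-k sparsification sketch. This is a hedged illustration, not the paper's method: the function names, the fixed compression rate, and the use of NumPy are assumptions here, whereas RFCSC learns an appropriate rate per client via reinforcement learning.

```python
import numpy as np

def top_k_compress(grad: np.ndarray, rate: float):
    """Keep only the largest-magnitude fraction `rate` of gradient entries.

    Illustrative sparsification sketch; RFCSC adapts `rate` per client,
    which is not reproduced here.
    """
    k = max(1, int(round(rate * grad.size)))
    flat = grad.ravel()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[idx]
    return idx, values  # sparse (index, value) pairs sent to the server

def decompress(idx: np.ndarray, values: np.ndarray, shape) -> np.ndarray:
    """Rebuild a dense gradient from the sparse (index, value) pairs."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# Toy gradient: transmitting 50% of entries keeps only the 3 largest magnitudes.
g = np.array([0.1, -2.0, 0.05, 3.0, -0.2, 0.01])
idx, vals = top_k_compress(g, rate=0.5)
g_hat = decompress(idx, vals, g.shape)
```

A learned policy would replace the fixed `rate=0.5` with a per-client value chosen to trade off communication savings against the accuracy loss introduced by dropping small gradient entries.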
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2024.128672