FLIS: Clustered Federated Learning via Inference Similarity for Non-IID Data Distribution
Published in: IEEE Open Journal of the Computer Society, 2023-01, Vol. 4, pp. 1-12
Format: Article
Language: English
Summary: Conventional federated learning (FL) approaches are ineffective in scenarios where clients differ significantly in the distributions of their local data. Non-IID client data causes local model updates to drift from the global optimum, which significantly degrades the performance of the trained models. In this paper, we present a new algorithm, FLIS, that addresses this problem by grouping clients into clusters with jointly trainable data distributions; the clusters are formed by comparing the inference similarity of client models. Our proposed framework captures settings where different groups of users may have their own objectives (learning tasks), but by aggregating their data with others in the same cluster (same learning task), superior models can be derived via more efficient and personalized federated learning. We present experimental results demonstrating the benefits of FLIS over state-of-the-art approaches on the CIFAR-100/10, SVHN, and FMNIST datasets. Our code is available at https://github.com/MMorafah/FLIS.
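The clustering step the summary describes, grouping clients whose models produce similar inferences, can be sketched roughly as follows. This is a minimal illustration of the general idea, not the paper's implementation: the cosine similarity measure, the flattened prediction vectors, the connected-components grouping, and the threshold value are all assumptions made for this example.

```python
import math

def inference_similarity(preds_a, preds_b):
    """Cosine similarity between two clients' flattened prediction vectors.

    Each vector is assumed to hold a model's predictions on a shared
    probe set, concatenated into one flat list of floats.
    """
    dot = sum(x * y for x, y in zip(preds_a, preds_b))
    norm_a = math.sqrt(sum(x * x for x in preds_a))
    norm_b = math.sqrt(sum(y * y for y in preds_b))
    return dot / (norm_a * norm_b)

def cluster_clients(client_preds, threshold=0.9):
    """Group clients into clusters of similar inference behavior.

    Builds the pairwise similarity graph, keeps edges above `threshold`,
    and returns the connected components via union-find. The threshold
    is an illustrative hyperparameter, not a value from the paper.
    """
    n = len(client_preds)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if inference_similarity(client_preds[i], client_preds[j]) >= threshold:
                parent[find(i)] = find(j)  # union the two clients

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Toy usage: clients 0-1 predict mostly class 0, clients 2-3 mostly class 1,
# so two clusters emerge, matching two distinct learning tasks.
client_preds = [
    [0.90, 0.10, 0.80, 0.20],
    [0.88, 0.12, 0.79, 0.21],
    [0.10, 0.90, 0.20, 0.80],
    [0.12, 0.88, 0.21, 0.79],
]
clusters = cluster_clients(client_preds)
```

Once clusters are identified, a per-cluster model aggregation (e.g. FedAvg restricted to each cluster) would replace the single global aggregation of conventional FL.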
ISSN: 2644-1268
DOI: 10.1109/OJCS.2023.3262203