PriFairFed: A Local Differentially Private Federated Learning Algorithm for Client-Level Fairness
Published in: IEEE Transactions on Mobile Computing, 2024-12, pp. 1-12
Format: Magazine article
Language: English
Summary: Local Differential Privacy (LDP) is a mechanism used to protect training privacy in Federated Learning (FL) systems, typically by introducing noise to data and local models. However, in real-world distributed edge systems, the non-independent and identically distributed nature of data means that clients in FL systems experience varying sensitivities to LDP-introduced noise. This disparity leads to fairness issues, potentially discouraging marginal clients from contributing further. In this paper, we explore how to enhance client-level performance fairness under LDP conditions. We model an FL system with LDP and formulate the problem PriFair using regularization, which assigns varied noise amplitudes to clients based on federated analytics. Additionally, we develop PriFairFed, a Tikhonov regularization-based algorithm that eliminates variable dependencies and optimizes variables alternately, while also offering a theoretical privacy guarantee. We further experimented with the algorithm on a real-world system with 20 Raspberry Pi clients, showing up to a 73.2% improvement in client-level fairness compared to existing state-of-the-art approaches, while maintaining a comparable level of privacy.
ISSN: 1536-1233, 1558-0660
DOI: 10.1109/TMC.2024.3516813
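The core idea the summary describes — each client perturbs its local model update with Gaussian noise, but with a *client-specific* noise amplitude rather than a uniform one — can be illustrated with a minimal sketch. This is not the paper's PriFairFed algorithm: the function names, the fixed `client_sigmas` values, and the plain averaging step are all illustrative assumptions; in the paper the per-client amplitudes are chosen via federated analytics and the optimization uses Tikhonov regularization.

```python
import random

def ldp_perturb(update, sigma, clip_norm=1.0):
    """Hypothetical LDP step: clip a client's update to clip_norm in L2,
    then add Gaussian noise with standard deviation sigma."""
    norm = sum(w * w for w in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [w * scale + random.gauss(0.0, sigma) for w in update]

def aggregate(perturbed_updates):
    """Plain FedAvg-style mean of the perturbed client updates."""
    n = len(perturbed_updates)
    return [sum(col) / n for col in zip(*perturbed_updates)]

# Per-client noise amplitudes: the key point from the summary is that
# clients receive *different* sigmas; these values are made up.
client_sigmas = [0.05, 0.10, 0.20]
client_updates = [[0.5, -0.3], [0.4, -0.2], [0.6, -0.4]]

perturbed = [ldp_perturb(u, s) for u, s in zip(client_updates, client_sigmas)]
global_update = aggregate(perturbed)
```

A client whose data makes it more sensitive to noise would be assigned a smaller sigma, trading some privacy amplitude for fairness in per-client accuracy.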