Differentially Private Federated Learning in Edge Networks: The Perspective of Noise Reduction
Published in: IEEE Network, 2022-09, Vol. 36 (5), pp. 167-172
Format: Article
Language: English
Summary: The proliferation of sensitive data on network edge devices in recent years has motivated edge computing, which moves machine learning (ML) applications from the data center to the edge of the network. At the same time, growing demands for data privacy have driven the adoption of federated learning (FL) as a new distributed paradigm. The inherent privacy protection of FL makes FL in edge computing a promising framework, especially for application scenarios where both privacy protection and resource utilization are critical. Nevertheless, FL still suffers from privacy leakage, since the messages exchanged between edge devices and the edge server can be revealed. Differential privacy has therefore drawn great attention for privacy protection in edge FL systems: it incurs extremely low computation cost and is readily implemented by adding well-designed noise to the target data or models. However, the added noise degrades learning performance, and achieving a satisfactory trade-off between privacy protection and learning performance is challenging. This article gives the first systematic study of differentially private FL in edge networks from the perspective of noise reduction. To this end, three noise reduction methods are summarized based on the intrinsic factors that influence the added noise scale: privacy amplification, model sparsification, and sensitivity reduction. Furthermore, we discuss ongoing challenges and propose future directions in which differential privacy can be implemented to obtain a better trade-off between privacy and learning performance.
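The summary's core mechanism — bounding each client update's sensitivity by clipping, optionally sparsifying it, then adding calibrated Gaussian noise — can be sketched in a few lines. This is a minimal illustrative sketch, not the article's exact algorithm; the function names, the fixed `noise_multiplier`, and the top-k sparsification rule are assumptions chosen for the example.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Sensitivity reduction: bound the L2 norm of a client's model update,
    # so the noise scale needed for a given privacy level is bounded too.
    norm = np.linalg.norm(update)
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return update * scale

def sparsify_top_k(update, k):
    # Model sparsification: release only the k largest-magnitude coordinates.
    # Fewer released coordinates means less total perturbation is injected.
    out = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    out[idx] = update[idx]
    return out

def gaussian_mechanism(update, clip_norm, noise_multiplier, rng):
    # Gaussian mechanism: noise std is proportional to the clipped sensitivity.
    clipped = clip_update(update, clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
raw = np.array([3.0, -4.0, 0.1, 0.2])   # a hypothetical local model update
private = gaussian_mechanism(sparsify_top_k(raw, k=2),
                             clip_norm=1.0, noise_multiplier=1.0, rng=rng)
```

Reducing `clip_norm` or the number of released coordinates lowers the injected noise for a fixed privacy budget, which is exactly the trade-off lever the article's three noise reduction methods target.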
ISSN: 0890-8044, 1558-156X
DOI: 10.1109/MNET.001.2200204