Shielding Federated Learning: A New Attack Approach and Its Defense

Bibliographic Details
Main Authors: Wan, Wei, Lu, Jianrong, Hu, Shengshan, Zhang, Leo Yu, Pei, Xiaobing
Format: Conference Proceeding
Language: English
Description
Summary: Federated learning (FL) is a newly emerging distributed learning framework that is communication-efficient and offers a user privacy guarantee. Wireless end-user devices can collaboratively train a global model while keeping their local training data private. Nevertheless, recent studies show that FL is highly susceptible to attacks from malicious users, since the server cannot directly access and audit users' local training data. In this work, we identify a new attack surface that is much easier to exploit while maintaining a high attack success rate. By exploiting an inherent flaw in the weight assignment strategy of the standard federated learning process, our attack can bypass existing defense methods and effectively degrade the performance of the global model. We then propose a new density-based detection strategy that defends against such attacks by modeling the problem as anomaly detection, effectively identifying anomalous updates. Experimental results on two typical datasets, MNIST and CIFAR-10, show that our attack significantly affects the convergence of the aggregated model and reduces the accuracy of the global model, even when state-of-the-art defense strategies are deployed, while our newly proposed defense effectively mitigates the attack.
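The "weight assignment strategy" the summary refers to is, in standard FedAvg-style aggregation, a weighting of each client's update by its self-reported local dataset size. The sketch below is a minimal illustration, not the paper's actual method: all function names, counts, and the k-nearest-neighbor density filter are hypothetical assumptions, chosen only to show (a) how an inflated sample count lets one poisoned update dominate the aggregate and (b) how a density-based outlier test can flag such an update.

```python
import numpy as np

def fedavg(updates, counts):
    """Standard FedAvg-style aggregation: each update is weighted by the
    client's self-reported local sample count (the attack surface)."""
    w = np.asarray(counts, dtype=float)
    w /= w.sum()
    return sum(wi * u for wi, u in zip(w, updates))

def density_filter(updates, k=3, tau=3.0):
    """Hypothetical density-based detector: flag updates whose mean distance
    to their k nearest neighbors is far above the cohort median."""
    U = np.stack(updates)
    d = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=-1)  # pairwise distances
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self-distance 0
    return knn <= tau * np.median(knn)  # True = keep, False = anomalous

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.01, size=4) for _ in range(9)]  # benign updates near 0
malicious = np.full(4, 5.0)                                 # poisoned update
updates = honest + [malicious]

# All clients report 100 samples: the attacker holds only 10% of the weight.
agg_honest = fedavg(updates, [100] * 10)
# Attacker inflates its reported count: its poisoned update dominates.
agg_inflated = fedavg(updates, [100] * 9 + [100_000])

mask = density_filter(updates)  # mask[-1] is False: the outlier is flagged
```

The detector here is deliberately simple: benign updates trained on similar data cluster tightly in parameter space, so an update whose neighborhood is sparse stands out regardless of the weight its sender claims.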
ISSN:1558-2612
DOI:10.1109/WCNC49053.2021.9417334