Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation
Main Authors:
Format: Conference Proceeding
Language: English
Online Access: Request full text
Summary: Membership inference (MI) attacks are more diverse in a Federated Learning (FL) setting, because the adversary may be an FL client, the server, or an external attacker. Existing defenses against MI attacks rely on perturbing either the model's output predictions or the training process. However, output perturbations are ineffective in an FL setting, because a malicious server can access the model before any output perturbation is applied, while training perturbations struggle to preserve utility. This paper proposes a novel defense, called CIP, that fortifies FL against MI attacks via client-level input perturbation during both training and inference. The key insight is to shift each client's local data distribution via a personalized perturbation, yielding a shifted model. CIP achieves a good balance between privacy and utility: the evaluation shows that CIP reduces accuracy by at most 0.7% while bringing attack success down to the level of random guessing.
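
The abstract only sketches CIP's key idea, so the following is a minimal illustrative sketch rather than the paper's actual method: each client derives a personalized, fixed input perturbation and applies it to its local data during both training and inference, so the model is only ever fitted to and queried on the shifted distribution. The additive Gaussian shift, the `scale` parameter, and the `PerturbedClient` and `local_training_step` names are all assumptions made for illustration.

```python
import torch

class PerturbedClient:
    """Hypothetical client holding a personalized input perturbation.

    The specific construction (a fixed additive Gaussian shift seeded
    by the client ID) is an assumption; the paper only states that each
    client's local data distribution is shifted by a personalized
    perturbation used in both training and inference.
    """

    def __init__(self, client_id: int, input_shape, scale: float = 0.1):
        # Seed with the client ID so the same shift is reproducible
        # across training rounds and at inference time.
        gen = torch.Generator().manual_seed(client_id)
        self.delta = scale * torch.randn(input_shape, generator=gen)

    def perturb(self, x: torch.Tensor) -> torch.Tensor:
        # Apply the identical shift during training AND inference,
        # so the model only ever sees the shifted distribution.
        return x + self.delta

def local_training_step(model, loss_fn, optimizer, client, x, y):
    # One local optimization step on perturbed inputs.
    optimizer.zero_grad()
    loss = loss_fn(model(client.perturb(x)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference, the same client would call `model(client.perturb(x))`, keeping train-time and test-time distributions consistent; how CIP actually constructs and personalizes the perturbation is described in the full paper.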
ISSN: 2158-3927
DOI: 10.1109/DSN58367.2023.00037