MODEL: A Model Poisoning Defense Framework for Federated Learning via Truth Discovery

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, 2024, Vol. 19, pp. 8747-8759
Main Authors: Wu, Minzhe, Zhao, Bowen, Xiao, Yang, Deng, Congjian, Liu, Yuan, Liu, Ximeng
Format: Article
Language: English
Description
Summary: Federated learning (FL) is an emerging paradigm for privacy-preserving machine learning, in which multiple clients collaborate to produce a global model by training individual models on local data. However, FL is vulnerable to model poisoning attacks (MPAs), since malicious clients can corrupt the global model by manipulating their local models. Although numerous model poisoning defenses have been studied extensively, they remain vulnerable to newly proposed optimized MPAs and are constrained by the need to presume a certain proportion of malicious clients. To this end, in this paper we propose MODEL, a model poisoning defense framework for FL via truth discovery (TD). A distinctive aspect of MODEL is its ability to effectively prevent both optimized and Byzantine MPAs. Furthermore, it requires no presupposed threshold on the fraction of malicious clients (e.g., less than 33% or no more than 50%). Specifically, a TD-based metric and a clustering-based filtering mechanism are proposed to evaluate local models without presupposing such a threshold. MODEL is also effective on non-independent and identically distributed (non-IID) training data. In addition, inspired by game theory, MODEL incorporates a truthful and fair incentive mechanism to encourage active client participation while reducing malicious clients' incentive to attack. Extensive comparative experiments demonstrate that MODEL effectively defends against optimized MPAs and outperforms the state of the art.
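The abstract only outlines the defense at a high level. As a rough illustration of the general idea, the Python sketch below combines a CRH-style truth-discovery weighting with a two-cluster filter over flattened client updates; the function names, the specific weight-update rule, and the use of k-means are all assumptions made for illustration, not the paper's actual algorithm.

import numpy as np
from sklearn.cluster import KMeans

def truth_discovery(updates, iters=10, eps=1e-12):
    # Iteratively estimate a "truth" vector and per-client reliability
    # weights; clients whose updates sit far from the truth get low weight.
    # (CRH-style update rule -- an assumption, not taken from the paper.)
    n = len(updates)
    weights = np.full(n, 1.0 / n)
    for _ in range(iters):
        truth = np.average(updates, axis=0, weights=weights)
        dists = np.array([np.linalg.norm(u - truth) ** 2 for u in updates])
        # Lower distance -> higher reliability (log-ratio weighting).
        weights = np.log(dists.sum() / (dists + eps) + eps)
        weights = np.clip(weights, eps, None)
        weights /= weights.sum()
    return truth, weights

def filter_and_aggregate(updates):
    # Cluster the reliability scores into two groups and keep the more
    # reliable cluster, so no malicious-fraction threshold is presupposed.
    updates = np.asarray(updates)
    _, scores = truth_discovery(updates)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        scores.reshape(-1, 1))
    keep_cluster = max((0, 1), key=lambda c: scores[labels == c].mean())
    return updates[labels == keep_cluster].mean(axis=0)

# Toy example: 8 benign updates near 0, 2 poisoned updates near 5.
rng = np.random.default_rng(0)
updates = np.vstack([rng.normal(0, 0.1, (8, 5)), rng.normal(5, 0.1, (2, 5))])
print(filter_and_aggregate(updates))

On this toy input the two poisoned updates receive clearly lower reliability scores and fall into the discarded cluster, so the printed aggregate stays near the benign mean.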
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2024.3461449