Two-phase Defense Against Poisoning Attacks on Federated Learning-based Intrusion Detection

Bibliographic Details
Published in: Computers & Security, 2023-06, Vol. 129, p. 103205, Article 103205
Main Authors: Lai, Yuan-Cheng, Lin, Jheng-Yan, Lin, Ying-Dar, Hwang, Ren-Hung, Lin, Po-Chin, Wu, Hsiao-Kuang, Chen, Chung-Kuan
Format: Article
Language: English
Description
Summary: The Machine Learning-based Intrusion Detection System (ML-IDS) has become increasingly popular because it does not require manually updated rules and recognizes attack variants better. However, due to the data privacy issues of ML-IDS, the Federated Learning-based IDS (FL-IDS) was proposed. In each round of federated learning, each participant first trains its local model and sends the model's weights to the global server, which then aggregates the received weights and distributes the aggregated global model back to the participants. An attacker can use poisoning attacks, including label-flipping attacks and backdoor attacks, to directly generate a malicious local model and thereby indirectly pollute the global model. Currently, only a few studies defend against poisoning attacks, and they discuss only label-flipping attacks in the image domain. Therefore, we propose a two-phase defense mechanism, called Defending Poisoning Attacks in Federated Learning (DPA-FL), applied to intrusion detection. The first phase uses relative differences to quickly compare weights between participants, because the local models of attackers and benign participants differ considerably. The second phase tests the aggregated model on a dataset and tries to identify the attackers when its accuracy is low. Experimental results show that DPA-FL reaches 96.5% accuracy in defending against poisoning attacks. Compared with other defense mechanisms, DPA-FL improves the F1-score by 20∼64% under backdoor attacks. Also, DPA-FL can exclude the attackers within twelve rounds when the attackers are few.
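
The two defense phases described in the summary can be sketched in code. The following is a minimal, hypothetical Python sketch, not the authors' implementation: it assumes each participant's local model is a flat NumPy weight vector, that a caller-supplied evaluate(weights, test_data) function returns accuracy on a held-out test set, and that threshold and acc_floor are illustrative placeholders rather than values from the paper.

import numpy as np

def phase1_relative_difference(local_weights, threshold=2.0):
    # Phase 1 (sketch): flag participants whose weights differ markedly
    # from everyone else's. For each participant, compute the mean L2
    # distance to all other participants' weight vectors and flag those
    # far above the median (a simple relative-difference heuristic).
    n = len(local_weights)
    dists = np.zeros(n)
    for i in range(n):
        dists[i] = np.mean([np.linalg.norm(local_weights[i] - local_weights[j])
                            for j in range(n) if j != i])
    median = np.median(dists)
    return [i for i in range(n) if dists[i] > threshold * median]

def phase2_accuracy_check(local_weights, evaluate, test_data, acc_floor=0.9):
    # Phase 2 (sketch): if the aggregated (FedAvg-style mean) model scores
    # poorly on the test set, re-aggregate while leaving out one participant
    # at a time; participants whose exclusion restores accuracy are treated
    # as attackers and dropped from the final aggregation.
    aggregated = np.mean(local_weights, axis=0)
    if evaluate(aggregated, test_data) >= acc_floor:
        return aggregated, []
    suspects = []
    for i in range(len(local_weights)):
        rest = [w for j, w in enumerate(local_weights) if j != i]
        if evaluate(np.mean(rest, axis=0), test_data) >= acc_floor:
            suspects.append(i)
    kept = [w for j, w in enumerate(local_weights) if j not in suspects] or local_weights
    return np.mean(kept, axis=0), suspects

In each round the server could run phase 1 first, since comparing weight vectors is cheap, and fall back to the slower leave-one-out check of phase 2 only when the aggregated model's accuracy drops, mirroring the quick-then-thorough ordering described in the summary.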
ISSN:0167-4048
1872-6208
DOI:10.1016/j.cose.2023.103205