
AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification

The collaborative nature of federated learning (FL) poses a major threat in the form of manipulation of local training data and local updates, known as the Byzantine poisoning attack. To address this issue, many Byzantine-robust aggregation rules (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants. This paper introduces a novel approach called AGRAMPLIFIER, aiming to simultaneously improve the robustness, fidelity, and efficiency of the existing AGRs. The core idea of AGRAMPLIFIER is to amplify the "morality" of local updates by identifying the most repressive features of each gradient update, which provides a clearer distinction between malicious and benign updates, consequently improving the detection effect. To achieve this objective, two approaches, namely AGRMP and AGRXAI, are proposed. AGRMP organizes local updates into patches and extracts the largest value from each patch, while AGRXAI leverages explainable AI methods to extract the gradient of the most activated features. By equipping AGRAMPLIFIER with the existing Byzantine-robust mechanisms, we successfully enhance the model's robustness, maintaining its fidelity and improving overall efficiency. AGRAMPLIFIER is universally compatible with the existing Byzantine-robust mechanisms. The paper demonstrates its effectiveness by integrating it with all mainstream AGR mechanisms. Extensive evaluations conducted on seven datasets from diverse domains against seven representative poisoning attacks consistently show enhancements in robustness, fidelity, and efficiency, with average gains of 40.08%, 39.18%, and 10.68%, respectively.

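The AGRMP variant described in the abstract is essentially a pooling step over each client's flattened gradient update: the update is split into fixed-size patches and only one dominant value per patch is kept before the usual Byzantine-robust screening is applied. The sketch below illustrates that idea in Python under stated assumptions; it is not the authors' implementation, and the function names (agrmp_amplify, median_screen), the patch size, the keep fraction, and the choice of "largest magnitude" as the per-patch value are hypothetical choices made for illustration only.

    import numpy as np

    def agrmp_amplify(update, patch_size=4):
        # Hypothetical sketch of the AGRMP idea: split the flattened update
        # into fixed-size patches and keep one dominant value per patch
        # (here the entry of largest magnitude, an assumption on our part).
        flat = np.asarray(update, dtype=np.float64).ravel()
        pad = (-len(flat)) % patch_size          # pad to a multiple of patch_size
        patches = np.pad(flat, (0, pad)).reshape(-1, patch_size)
        idx = np.abs(patches).argmax(axis=1)
        return patches[np.arange(len(patches)), idx]

    def median_screen(updates, keep=0.6, patch_size=4):
        # Toy distance-based AGR applied to the amplified updates: score each
        # client by its distance to the coordinate-wise median of the amplified
        # vectors, keep the closest fraction, and average the ORIGINAL updates
        # of the kept clients, so the aggregate itself is not distorted.
        amplified = np.stack([agrmp_amplify(u, patch_size) for u in updates])
        dists = np.linalg.norm(amplified - np.median(amplified, axis=0), axis=1)
        kept = np.argsort(dists)[: max(1, int(keep * len(updates)))]
        aggregate = np.mean([np.ravel(updates[i]) for i in kept], axis=0)
        return aggregate, kept

    # Example: eight benign clients plus two crude sign-flipping attackers.
    rng = np.random.default_rng(0)
    benign = [rng.normal(0.0, 0.1, 1000) for _ in range(8)]
    malicious = [-5.0 * b for b in benign[:2]]
    aggregate, kept = median_screen(benign + malicious)
    print("clients kept by the screen:", kept)

Using the amplified view only for screening while aggregating the original updates of the surviving clients is one plausible reading of the abstract's claim that robustness improves while fidelity is maintained; the system described in the paper instead plugs the amplification into existing mainstream AGR mechanisms rather than the toy median screen shown here.
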
Bibliographic Details
Published in: arXiv.org, 2023-11
Main Authors: Gong, Zirui; Shen, Liyue; Zhang, Yanjun; Zhang, Leo Yu; Wang, Jingwei; Bai, Guangdong; Xiang, Yong
Format: Article
Language: English
Subjects: Accuracy; Amplification; Efficiency; Machine learning; Poisoning; Robustness
EISSN: 2331-8422
DOI: 10.48550/arxiv.2311.06996
Publisher: Cornell University Library, arXiv.org (Ithaca)
Source: Publicly Available Content Database
Online Access: Get full text