An Empirical Analysis of Federated Learning Models Subject to Label-Flipping Adversarial Attack
In this paper, we empirically analyze adversarial attacks on selected federated learning models. The specific learning models considered are Multinomial Logistic Regression (MLR), Support Vector Classifier (SVC), Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Random Forest, XGBoost, and Long Short-Term Memory (LSTM). For each model, we simulate label-flipping attacks, experimenting extensively with 10 federated clients and with 100 federated clients. We vary the percentage of adversarial clients from 10% to 100% and, simultaneously, we vary the percentage of labels flipped by each adversarial client from 10% to 100%. Among other results, we find that models differ in their inherent robustness to the two vectors in our label-flipping attack, i.e., the percentage of adversarial clients and the percentage of labels flipped by each adversarial client. We discuss the potential practical implications of our results.
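The authors' code is not part of this record; the sketch below is a minimal, hypothetical Python illustration of the label-flipping setup the abstract describes, in which a fraction `adv_frac` of clients is adversarial and each adversarial client flips a fraction `flip_frac` of its local labels. All function names and the random-offset flipping rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def flip_labels(y, flip_frac, num_classes, rng):
    """Flip a flip_frac fraction of the labels in y to a different class."""
    y = y.copy()
    n_flip = int(flip_frac * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Adding a random offset in [1, num_classes) modulo num_classes
    # guarantees every flipped label lands on a different class.
    y[idx] = (y[idx] + rng.integers(1, num_classes, size=n_flip)) % num_classes
    return y

def poison_clients(client_data, adv_frac, flip_frac, num_classes, seed=0):
    """Apply label flipping to the first adv_frac fraction of clients;
    the remaining clients keep their clean labels."""
    rng = np.random.default_rng(seed)
    n_adv = int(adv_frac * len(client_data))
    return [
        (X, flip_labels(y, flip_frac, num_classes, rng)) if i < n_adv else (X, y)
        for i, (X, y) in enumerate(client_data)
    ]
```

Sweeping both `adv_frac` and `flip_frac` over 0.1 through 1.0, with either 10 or 100 clients, would reproduce the two attack vectors varied in the experiments.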
Published in: | arXiv.org 2024-12 |
---|---|
Main Authors: | Bhatnagar, Kunal; Chattanathan, Sagana; Dang, Angela; Eranki, Bhargav; Rana, Ronnit; Charan Sridhar; Vedam, Siddharth; Yao, Angie; Stamp, Mark |
Format: | Article |
Language: | English |
Subjects: | Artificial neural networks; Clients; Empirical analysis; Federated learning; Labels; Machine learning; Multilayer perceptrons; Recurrent neural networks |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Bhatnagar, Kunal; Chattanathan, Sagana; Dang, Angela; Eranki, Bhargav; Rana, Ronnit; Charan Sridhar; Vedam, Siddharth; Yao, Angie; Stamp, Mark |
description | In this paper, we empirically analyze adversarial attacks on selected federated learning models. The specific learning models considered are Multinomial Logistic Regression (MLR), Support Vector Classifier (SVC), Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Random Forest, XGBoost, and Long Short-Term Memory (LSTM). For each model, we simulate label-flipping attacks, experimenting extensively with 10 federated clients and with 100 federated clients. We vary the percentage of adversarial clients from 10% to 100% and, simultaneously, we vary the percentage of labels flipped by each adversarial client from 10% to 100%. Among other results, we find that models differ in their inherent robustness to the two vectors in our label-flipping attack, i.e., the percentage of adversarial clients and the percentage of labels flipped by each adversarial client. We discuss the potential practical implications of our results. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3149108748 |
source | Publicly Available Content Database |
subjects | Artificial neural networks; Clients; Empirical analysis; Federated learning; Labels; Machine learning; Multilayer perceptrons; Recurrent neural networks |
title | An Empirical Analysis of Federated Learning Models Subject to Label-Flipping Adversarial Attack |