
MARNet: Backdoor Attacks Against Cooperative Multi-Agent Reinforcement Learning

Bibliographic Details
Published in: IEEE Transactions on Dependable and Secure Computing, 2023-09, Vol. 20 (5), p. 1-11
Main Authors: Chen, Yanjiao, Zheng, Zhicong, Gong, Xueluan
Format: Article
Language:English
Subjects:
description Recent works have revealed that backdoor attacks against Deep Reinforcement Learning (DRL) can lead to abnormal action selections by the agent, which may result in failure or even catastrophe in crucial decision processes. However, existing attacks only consider single-agent RL systems, in which a single agent observes the global state and has full control of the decision process. In this paper, we explore a new backdoor attack paradigm in cooperative multi-agent reinforcement learning (CMARL) scenarios, where a group of agents coordinate with each other to achieve a common goal, while each agent can only observe the local state.

In the proposed MARNet attack framework, we carefully design a pipeline of trigger design, action poisoning, and reward hacking modules to accommodate the cooperative multi-agent setting. In particular, as only a subset of agents can observe the triggers in their local observations, we maneuver their actions toward the worst actions suggested by an expert policy model. Since the global reward in CMARL is aggregated from the individual rewards of all agents, we propose to modify the reward in a way that boosts the bad actions of poisoned agents (agents who observe the triggers) but mitigates the influence on non-poisoned agents.

We conduct extensive experiments on three classical CMARL algorithms, VDN, COMA, and QMIX, in two popular CMARL games, Predator Prey and SMAC. The results show that baselines extended from single-agent DRL backdoor attacks seldom work in CMARL problems, while MARNet performs well, reducing the utility under attack by nearly 100%. We apply fine-tuning as a potential defense against MARNet and demonstrate that fine-tuning cannot entirely eliminate the effect of the attack.
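The reward-hacking idea in the abstract — the team reward is a sum of per-agent rewards, so an attacker can rewrite only the poisoned agents' rewards to reinforce the target bad action while leaving clean agents' signals untouched — can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the function name, arguments, and bonus scheme are all assumptions made for clarity.

```python
# Hypothetical sketch of reward hacking in a CMARL backdoor setting.
# Poisoned agents (those whose local observation contains the trigger)
# are rewarded for taking the attacker's target "bad" action and
# penalized otherwise; non-poisoned agents keep their original reward,
# mitigating the attack's influence on them. The bonus magnitude is an
# illustrative assumption, not a value from the paper.

def hack_rewards(rewards, poisoned, took_bad_action, bonus=1.0):
    """Return modified per-agent rewards.

    rewards         : list of float, one entry per agent
    poisoned        : set of agent indices that observed the trigger
    took_bad_action : set of agent indices that chose the target action
    """
    hacked = []
    for i, r in enumerate(rewards):
        if i in poisoned:
            # Reinforce the bad action; penalize deviation from it.
            hacked.append(bonus if i in took_bad_action else -bonus)
        else:
            # Leave clean agents' reward signal unchanged.
            hacked.append(r)
    return hacked

# The global training signal is then the aggregate of the hacked rewards.
team_reward = sum(hack_rewards([0.2, 0.5, -0.1],
                               poisoned={0}, took_bad_action={0}))
```

Because the global reward is an aggregate, the poisoned agents' inflated rewards bias the joint value estimate toward the attacker's target actions without directly corrupting the other agents' individual signals.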
DOI: 10.1109/TDSC.2022.3207429
ISSN: 1545-5971
EISSN: 1941-0018
Source: IEEE Xplore (Online service)
subjects Algorithms
Backdoor attacks
Catastrophic events
Computer crime
Convergence
Deep learning
Games
multi-agent reinforcement learning
Multiagent systems
Pipeline design
Predator prey systems
Q-learning
Task analysis
Training