
Towards Transferable Adversarial Attacks on Image and Video Transformers

The transferability of adversarial examples across different convolutional neural networks (CNNs) makes it feasible to perform black-box attacks, resulting in security threats for CNNs. However, far less effort has been devoted to transferable attacks on vision transformers (ViTs), which achieve superior performance on various computer vision tasks. Unlike CNNs, ViTs establish relationships between input patches through the self-attention module, so adversarial examples crafted on CNNs may transfer poorly to ViTs. To assess the security of ViTs comprehensively, we investigate transferability across different ViTs in both untargeted and targeted scenarios. More specifically, we propose a Pay No Attention (PNA) attack, which ignores attention gradients during backpropagation to improve the linearity of backpropagation. Additionally, we introduce a PatchOut/CubeOut attack for image/video ViTs: during each iteration, it optimizes perturbations within a randomly selected subset of patches/cubes, preventing over-fitting to the white-box surrogate ViT. Furthermore, we maximize the L2 norm of perturbations, ensuring that the generated adversarial examples deviate significantly from the benign ones. These strategies are designed to be mutually compatible; combining them enhances transferability by jointly exploiting patch-based inputs and the self-attention of ViTs. Moreover, the combined attack integrates seamlessly with existing transferable attacks, providing an additional boost to transferability. We conduct experiments on ImageNet and Kinetics-400 for image and video ViTs, respectively, and the results demonstrate the effectiveness of the proposed method.
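The "ignore attention gradients" idea behind PNA can be illustrated with a small sketch. Below is a minimal, hypothetical PyTorch re-implementation of a ViT-style self-attention block whose backward pass skips the attention-map branch; the module layout and names (`PNAAttention`, `dim`, `num_heads`) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class PNAAttention(nn.Module):
    """ViT-style self-attention whose backward pass ignores the attention map.

    Forward behaviour matches standard multi-head self-attention; detaching
    the softmax attention weights means input gradients flow back only
    through the value path, sketching the "Pay No Attention" idea from the
    abstract. A generic layout, not the paper's exact implementation.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, d)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        # PNA: treat the attention map as a constant during backpropagation,
        # so gradients skip softmax(QK^T) and flow only through v.
        out = (attn.detach() @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

Because `attn` is detached only inside the matrix product, the forward pass is unchanged; when crafting an attack, input gradients then travel solely through the value path, which the abstract credits with making backpropagation more linear.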
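PatchOut (and its video analogue CubeOut) restricts each update to a random subset of patches. A sketch of such a per-iteration mask follows, with illustrative sizes (`img_size=224`, `patch=16`, `keep=100` are assumptions, not the paper's tuned values):

```python
import torch


def patchout_mask(img_size: int = 224, patch: int = 16,
                  keep: int = 100) -> torch.Tensor:
    """Binary pixel mask keeping `keep` randomly chosen ViT patches.

    Re-sampled every iteration, so the perturbation is updated on a
    different patch subset each step, which is how the abstract describes
    PatchOut avoiding over-fitting to the white-box surrogate.
    """
    n = img_size // patch                               # patches per side
    flat = torch.zeros(n * n)
    flat[torch.randperm(n * n)[:keep]] = 1.0            # pick random patches
    grid = flat.view(1, 1, n, n)
    # Upsample the patch grid to pixel resolution: (1, 1, H, W).
    return grid.repeat_interleave(patch, 2).repeat_interleave(patch, 3)


# Inside an iterative attack, one masked FGSM-style step might look like:
#   grad = torch.autograd.grad(loss, adv)[0]
#   adv = adv + alpha * grad.sign() * patchout_mask().to(adv.device)
```

For video ViTs, the same construction would extend to a (T, H, W) grid of cubes rather than a 2D grid of patches.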
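The abstract's third ingredient, maximizing the L2 norm of the perturbation, can be folded into the attack objective. A hedged sketch (the weight `lam` is a hypothetical hyperparameter, and the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F


def attack_objective(logits: torch.Tensor, labels: torch.Tensor,
                     perturbation: torch.Tensor,
                     lam: float = 0.1) -> torch.Tensor:
    """Untargeted objective to ascend: misclassification loss plus an L2
    term that pushes the adversarial example away from the benign input."""
    ce = F.cross_entropy(logits, labels)
    l2 = perturbation.flatten(1).norm(p=2, dim=1).mean()
    return ce + lam * l2
```

Gradient ascent on this objective, with updates masked by PatchOut and gradients computed through PNA-style attention, jointly encourages misclassification and large deviation from the benign input, while the usual L-infinity clipping keeps the perturbation within budget.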

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2023-01, Vol. 32, p. 1-1
Main Authors: Wei, Zhipeng; Chen, Jingjing; Goldblum, Micah; Wu, Zuxuan; Goldstein, Tom; Jiang, Yu-Gang; Davis, Larry S.
Format: Article
Language: English
Subjects: Adversarial attack; Artificial neural networks; Back propagation; Back propagation networks; Backpropagation; Closed box; Computer vision; Cubes; Glass box; Iterative methods; Perturbation; Perturbation methods; Presence network agents; Security; Training; Transferable attack; Transformers; Vision transformer
DOI: 10.1109/TIP.2023.3331582
ISSN: 1057-7149
EISSN: 1941-0042
Source: IEEE Electronic Library (IEL) Journals