
Light can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Spot Light

Bibliographic Details
Published in: Computers & Security, 2023-09, Vol. 132, p. 103345, Article 103345
Main Authors: LI, Yufeng, YANG, Fengyu, LIU, Qi, LI, Jiangtao, CAO, Chenhong
Format: Article
Language: English
Description: With the development of machine learning models across industries, there has been a corresponding increase in research demonstrating their vulnerability to adversarial examples (AE). Realizing physically robust AE that survive real-world environmental conditions faces challenges such as varied viewing distances and angles. Laser-beam-based methods claim to overcome the obviousness, semi-permanence, and immutability drawbacks of adversarial patches. However, laser-beam-based AE cannot be captured by a camera in daylight, which limits their application scenarios. In this research, we introduce Adversarial Spot Light (AdvSL), a novel approach that enables adversaries to build physically robust real-world AE using spotlight flashlights. Since the flashlights can be switched on and off as required, AdvSL allows adversaries to perform more flexible attacks than adversarial patches. In particular, AdvSL is feasible under a variety of ambient light conditions. As a first step, we model a spot light with a set of parameters that can be physically controlled by the adversary. To determine the optimal parameters for the light, a heuristic optimization approach is adopted. Further, we use the k-random-restart technique to prevent the approach from becoming stuck in a local optimum. To demonstrate the effectiveness of the proposed approach, we conduct experiments under different physical conditions, including indoor and outdoor tests. In the digital test, AdvSL causes misclassifications on state-of-the-art neural networks with an attack success rate of up to 93.7%. In the outdoor test, AdvSL causes misclassifications on a traffic sign classification model with an attack success rate of up to 84%. In the physical setting, experiments show that AdvSL is robust in non-bright settings and feasible in bright settings. Finally, we discuss defenses against AdvSL and evaluate an adaptive defender using adversarial learning, which reduces the attack success rate from 92.2% to 54.8% in the digital domain.
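The k-random-restart optimization described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the objective function, the four light parameters (position, radius, intensity), and the stochastic hill-climbing inner loop are all assumptions; in the actual attack the objective would query the target classifier with the spot-lit image.

```python
import math
import random

def attack_objective(params):
    """Toy stand-in for the attack loss (hypothetical). A multimodal
    function of the light parameters, so local optima actually occur.
    Higher means a more successful attack."""
    x, y, radius, intensity = params
    return (math.sin(3 * x) * math.cos(2 * y)
            + math.exp(-((radius - 0.4) ** 2 + (intensity - 0.7) ** 2)))

def hill_climb(start, steps=200, step_size=0.05, rng=None):
    """Simple stochastic hill climbing over the light parameters;
    can get stuck in a local optimum."""
    rng = rng or random.Random()
    best, best_score = list(start), attack_objective(start)
    for _ in range(steps):
        cand = [p + rng.uniform(-step_size, step_size) for p in best]
        cand = [min(1.0, max(0.0, p)) for p in cand]  # clamp params to [0, 1]
        score = attack_objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

def k_random_restart(k=10, seed=0):
    """Run hill climbing from k random starting points and keep the best
    result, so one run trapped in a local optimum does not decide the
    outcome -- the role k-random-restart plays in AdvSL."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(k):
        start = [rng.random() for _ in range(4)]  # x, y, radius, intensity
        cand, score = hill_climb(start, rng=rng)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

By construction, the best score over k restarts is at least as good as the score of any single run, which is why the restarts guard against local optima.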
DOI: 10.1016/j.cose.2023.103345
ISSN: 0167-4048
EISSN: 1872-6208
Source: ScienceDirect Freedom Collection
Subjects: Adversarial attack; Heuristic optimization algorithm; Image classification; Machine learning; Physical adversarial example