
Light can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Spot Light

Bibliographic Details
Published in: Computers & Security, 2023-09, Vol. 132, Article 103345
Main Authors: Li, Yufeng; Yang, Fengyu; Liu, Qi; Li, Jiangtao; Cao, Chenhong
Format: Article
Language: English
Description
Summary: With the deployment of machine learning models across industries, there has been a corresponding increase in research demonstrating their vulnerability to adversarial examples (AE). Realizing physically robust AE that survive real-world environmental conditions faces challenges such as varied viewing distances and angles. Laser beam-based methods claim to overcome the obviousness, semi-permanence, and immutability drawbacks of adversarial patches. However, laser beam-based AE cannot be captured by a camera in daylight, which limits their application scenarios. In this research, we introduce Adversarial Spot Light (AdvSL), a novel approach that enables adversaries to build physically robust real-world AE using spotlight flashlights. Since spotlight flashlights can be switched on and off as required, AdvSL allows adversaries to perform more flexible attacks than adversarial patches. In particular, AdvSL is feasible under a variety of ambient light conditions. As a first step, we model a spot light with a set of parameters that can be physically controlled by the adversary. To determine the optimal parameters for the light, a heuristic optimization approach is adopted. Further, we use the k-random-restart technique to prevent the search from becoming stuck in a local optimum. To demonstrate the effectiveness of the proposed approach, we conduct experiments under different physical conditions, including indoor and outdoor tests. In the digital test, AdvSL causes misclassifications on state-of-the-art neural networks with up to a 93.7% attack success rate. In the outdoor test, AdvSL causes misclassifications on a traffic sign classification model with up to an 84% attack success rate. In the physical setting, experiments show that AdvSL is robust in non-bright conditions and remains feasible in bright conditions. Finally, we discuss defenses against AdvSL and evaluate an adaptive defender using adversarial learning, which reduces the attack success rate from 92.2% to 54.8% in the digital setting.
ISSN: 0167-4048 (print); 1872-6208 (electronic)
DOI: 10.1016/j.cose.2023.103345
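
The abstract describes a gradient-free heuristic search over a handful of physically controllable spot light parameters, restarted k times to escape local optima. The sketch below illustrates that idea in Python under stated assumptions: the parameter set (position, radius, intensity, hue), the render and model_loss callables, and the hill-climbing proposal step are all illustrative, not the paper's actual light model or optimizer.

```python
import numpy as np

# Hypothetical spot light parameter bounds: (x, y) center in normalized
# image coordinates, spot radius, intensity, and hue. The paper's real
# parameterization may differ.
BOUNDS = np.array([
    [0.0, 1.0],   # x position
    [0.0, 1.0],   # y position
    [0.05, 0.5],  # spot radius
    [0.1, 1.0],   # light intensity
    [0.0, 1.0],   # hue
])

def random_params(rng):
    """Sample a parameter vector uniformly inside the physical bounds."""
    return rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1])

def attack_loss(params, image, label, render, model_loss):
    """Adversary's objective: loss of the true label on the lit image.

    render(image, params) composites a simulated spot light onto the image;
    model_loss(lit_image, label) returns the classifier's loss. Both are
    assumed callables supplied by the caller."""
    return model_loss(render(image, params), label)

def advsl_search(image, label, render, model_loss,
                 k_restarts=10, iters=200, step=0.05, seed=0):
    """k-random-restart hill climbing: restart the greedy local search
    from k random points and keep the best parameters found overall."""
    rng = np.random.default_rng(seed)
    best_p, best_l = None, -np.inf
    for _ in range(k_restarts):
        p = random_params(rng)
        l = attack_loss(p, image, label, render, model_loss)
        for _ in range(iters):
            # Propose a small Gaussian perturbation, clipped to the
            # physically realizable parameter range.
            cand = np.clip(p + rng.normal(0.0, step, p.shape),
                           BOUNDS[:, 0], BOUNDS[:, 1])
            cl = attack_loss(cand, image, label, render, model_loss)
            if cl > l:  # greedy: accept only improvements
                p, l = cand, cl
        if l > best_l:
            best_p, best_l = p, l
    return best_p, best_l
```

The restarts matter because the attack loss over light parameters is non-convex: a single greedy run can stall on a configuration that raises the loss slightly without flipping the prediction, whereas the best of k independent runs is far more likely to find a misclassifying light placement.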