Singular Value Manipulating: An Effective DRL-Based Adversarial Attack on Deep Convolutional Neural Network

Bibliographic Details
Published in: Neural Processing Letters, 2023-12, Vol. 55 (9), p. 12459-12480
Main Authors: He, Shuai; Fu, Cai; Feng, Guanyun; Lv, Jianqiang; Deng, Fengyang
Format: Article
Language:English
Description
Summary: In recent years, deep convolutional neural networks (DCNNs) have become increasingly prevalent in image processing applications. However, DCNNs are vulnerable to adversarial attacks: imperceptible perturbations added to the input can cause the network to misclassify an image. In this study, we propose a black-box transferable adversarial attack method. The goal is to deepen understanding of the vulnerability of these networks and to help develop more robust defenses against such attacks. The attack efficiently generates adversarial examples by manipulating the singular value matrix of an image rather than directly perturbing pixels with complex noise, and it uses soft actor-critic to search for an optimal perturbation strategy. We perform extensive evaluations of the proposed singular value manipulating attack (SVMA) with the VOC 2012 and MS COCO 2017 datasets on object detection models, the MNIST dataset on image classification models, and the TT-100K dataset in a real-world case study. Comparison results demonstrate that SVMA achieves consistent query efficiency and attack ability on both the one-stage detector YOLO and the two-stage detector Faster R-CNN. Additionally, our case study demonstrates that the adversarial examples produced by SVMA are effective in real-world scenarios. Finally, we propose a defense against such attacks.
ISSN:1370-4621
1573-773X
DOI:10.1007/s11063-023-11428-5
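The abstract's core idea, perturbing an image's singular value matrix instead of its pixels, can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's method: the function name `svd_perturb` and the fixed per-singular-value scaling vector are hypothetical, whereas the actual SVMA attack learns the perturbation strategy with a soft actor-critic agent.

```python
import numpy as np

def svd_perturb(image: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Illustrative sketch: rescale the singular values of a grayscale
    image and reconstruct it, leaving pixel values in [0, 255].

    This only shows the SVD-based perturbation idea; it does not
    implement the learned attack from the paper."""
    # Decompose: image = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    # Perturb each singular value by a multiplicative factor
    perturbed = U @ np.diag(s * scales) @ Vt
    return np.clip(perturbed, 0.0, 255.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0.0, 255.0, size=(28, 28))       # stand-in for an MNIST digit
    scales = 1.0 + 0.05 * rng.standard_normal(28)      # small tweak per singular value
    adv = svd_perturb(img, scales)
    print(adv.shape)                                   # same shape as the input
```

Because the leading singular values carry most of an image's energy, small multiplicative changes to them alter global structure while keeping the pixel-space difference visually subtle, which is the intuition behind attacking the singular value matrix rather than injecting pixel-level noise.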