
Vulnerable point detection and repair against adversarial attacks for convolutional neural networks

Bibliographic Details
Published in: International Journal of Machine Learning and Cybernetics, 2023-12, Vol. 14 (12), pp. 4163-4192
Main Authors: Gao, Jie, Xia, Zhaoqiang, Dai, Jing, Dang, Chen, Jiang, Xiaoyue, Feng, Xiaoyi
Format: Article
Language: English
Description
Summary: Recently, convolutional neural networks have been shown to be sensitive to artificially designed perturbations that are imperceptible to the naked eye. Image classification, semantic segmentation, and object detection all face this problem. The existence of adversarial examples raises questions about the security of smart applications. Several works have addressed this problem and proposed defensive strategies to resist adversarial attacks; however, none has explored the areas of a model that are vulnerable under multiple attacks. In this work, we fill this gap by exploring the vulnerable areas of the model, i.e., the points most susceptible to adversarial attacks. Specifically, under various attack methods of different strengths, we conduct extensive experiments on two datasets with three different networks and illustrate several phenomena. In addition, by exploiting a Siamese network, we propose a novel approach that discovers the deficiencies of the model more intuitively. Moreover, we provide a novel adaptive vulnerable-point repair method to improve the adversarial robustness of the model. Extensive experimental results show that the proposed method effectively improves the model's adversarial robustness.
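
For readers unfamiliar with how such imperceptible perturbations are produced, the sketch below shows the standard Fast Gradient Sign Method (FGSM) in PyTorch. This is an illustrative example of a gradient-based attack only, not the attack or repair method proposed in the article; the function name fgsm_attack and the default epsilon are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x to maximize the classification loss.

    model: a differentiable classifier returning logits.
    x:     input batch with pixel values in [0, 1].
    y:     ground-truth labels.
    eps:   L-infinity perturbation budget (hypothetical default).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then project back
    # to the valid pixel range so the perturbation stays bounded.
    x_adv = x_adv.detach() + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0)
```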
ISSN: 1868-8071, 1868-808X
DOI: 10.1007/s13042-023-01888-5