
DSCAE: a denoising sparse convolutional autoencoder defense against adversarial examples

Bibliographic Details
Published in: Journal of Ambient Intelligence and Humanized Computing, 2022-03, Vol. 13 (3), p. 1419-1429
Main Authors: Ye, Hongwei, Liu, Xiaozhang, Li, Chunlai
Format: Article
Language:English
Description
Summary: Deep neural networks are a state-of-the-art method in computer vision. Small perturbations added to benign images can induce a deep learning network to make incorrect predictions, even though the perturbations are imperceptible to human eyes. Such adversarial examples threaten the safety of deep learning models in many real-world applications. In this work, we propose a method called the denoising sparse convolutional autoencoder (DSCAE) to defend against adversarial perturbations. It is a preprocessing module that works before the classification model and can remove substantial amounts of adversarial noise. The DSCAE defense has been evaluated against the FGSM, DeepFool, C&W, and JSMA attacks on the MNIST and CIFAR-10 datasets. The experimental results show that DSCAE defends effectively against state-of-the-art attacks.
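
At a high level, the defense described in the abstract is a convolutional autoencoder trained to map perturbed images back to their clean counterparts, with a sparsity constraint on the latent representation; the purified output is then handed to the classifier. The following is a minimal sketch of that idea, not the authors' implementation: the layer sizes, the L1 sparsity penalty, the Gaussian training noise, and all hyperparameters below are assumptions.

    # Sketch of a denoising sparse convolutional autoencoder used as a
    # preprocessing defense (assumed architecture, not the paper's exact model).
    import torch
    import torch.nn as nn

    class DenoisingSparseCAE(nn.Module):
        def __init__(self, in_channels=1):  # 1 for MNIST, 3 for CIFAR-10
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, in_channels, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            z = self.encoder(x)          # sparse latent code
            return self.decoder(z), z    # reconstruction and code

    def training_step(model, optimizer, clean, noise_std=0.1, sparsity_weight=1e-4):
        # Perturb the clean batch (Gaussian noise stands in here for adversarial
        # noise), reconstruct it, and penalise reconstruction error plus an L1
        # sparsity term on the latent activations.
        noisy = (clean + noise_std * torch.randn_like(clean)).clamp(0.0, 1.0)
        recon, z = model(noisy)
        loss = nn.functional.mse_loss(recon, clean) + sparsity_weight * z.abs().mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # At inference time the purified image, model(x_adv)[0], is fed to the
    # downstream classifier instead of the raw (possibly adversarial) input.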
ISSN:1868-5137
1868-5145
DOI:10.1007/s12652-020-02642-3