Adversarial Perturbation Defense on Deep Neural Networks

Deep neural networks (DNNs) have been shown to be vulnerable to well-designed adversarial perturbations. Images with small perturbations that are imperceptible to the human eye can induce DNN-based image classifiers to make erroneous predictions with high probability. Advers...
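As a minimal sketch of the kind of perturbation the abstract describes, the fast gradient sign method (FGSM) nudges an input in the direction that increases the classifier's loss, with each entry changed by at most a small budget eps. The tiny logistic "classifier" and all names below are illustrative assumptions for this sketch, not models from the survey itself.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x against a logistic classifier sigmoid(w @ x + b).

    y is the true label (0 or 1); eps bounds the per-entry change,
    so the perturbation stays small in the max-norm sense.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad = (p - y) * w                # d(cross-entropy)/dx for this linear model
    return x + eps * np.sign(grad)    # step in the loss-increasing direction

rng = np.random.default_rng(0)
w = rng.normal(size=8)               # hypothetical fixed classifier weights
b = 0.0
x = rng.normal(size=8)               # a clean input
y = 1.0                              # its true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
print(np.max(np.abs(x_adv - x)))     # never exceeds eps = 0.1
```

Even with this per-entry bound, the perturbation is aligned with the loss gradient, so the model's confidence in the true class strictly drops, which is the mechanism behind the misclassifications the survey analyzes.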


Bibliographic Details
Published in: ACM Computing Surveys, 2022-11, Vol. 54(8), pp. 1-36, Article 159
Main Authors: Zhang, Xingwei, Zheng, Xiaolong, Mao, Wenji
Format: Article
Language:English