Adversarial Perturbation Defense on Deep Neural Networks
Deep neural networks (DNNs) have been shown to be easily fooled by well-designed adversarial perturbations. Images carrying small perturbations that are imperceptible to the human eye can induce DNN-based image classifiers to make erroneous predictions with high probability. Advers...
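As a minimal sketch of the kind of attack the abstract describes (not this survey's own method), the snippet below illustrates the fast gradient sign method (FGSM), a standard way to craft a small, bounded perturbation; it assumes PyTorch, and the toy classifier and random input are placeholders for illustration only.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the
# classifier's loss, with the perturbation bounded by epsilon per pixel.
# The model and tensors here are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a sign-of-gradient perturbation, clipped to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier and random "image" purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()
    x = torch.rand(1, 3, 32, 32)
    label = model(x).argmax(dim=1)  # treat the clean prediction as the label
    x_adv = fgsm_perturb(model, x, label)
    print("clean prediction:", label.item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
    print("max pixel change:", (x_adv - x).abs().max().item())
```

The per-pixel change is at most epsilon, which is why such perturbations can remain visually imperceptible while still shifting the model's decision.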
| Published in: | ACM Computing Surveys, 2022-11, Vol. 54 (8), p. 1-36, Article 159 |
|---|---|
| Format: | Article |
| Language: | English |