Adversarial attacks on Faster R-CNN object detector
Published in: Neurocomputing (Amsterdam), 2020-03, Vol. 382, pp. 87-95
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Adversarial attacks have stimulated research interest in the field of deep learning security. However, most existing adversarial attack methods are developed for classification. In this paper, we use Projected Gradient Descent (PGD), the strongest first-order attack method on classification, to produce adversarial examples against the total loss of the Faster R-CNN object detector. Compared with the state-of-the-art Dense Adversary Generation (DAG) method, our attack is more efficient and more powerful in both white-box and black-box settings, and is applicable to a variety of neural network architectures. On Pascal VOC2007, under white-box attack, DAG reduces Faster R-CNN with a VGG16 backbone to 5.92% mAP using 41.42 iterations on average, while our method achieves 0.90% mAP using only 4 iterations. We also analyze the differences between attacks on classification and on detection, and find that in addition to misclassification, adversarial examples on detection also lead to mis-localization. Furthermore, we validate the adversarial effectiveness of both the Region Proposal Network (RPN) loss and the Fast R-CNN loss, the components of the total loss. Our research will provide inspiration for further efforts in adversarial attacks on other vision tasks.
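
The summary describes running PGD against the detector's total loss to craft adversarial examples. Below is a minimal sketch of that idea, assuming PyTorch and a hypothetical `detector` callable that returns the scalar total loss (e.g., the sum of the RPN and Fast R-CNN losses) for an image and its ground-truth targets; the step count and perturbation budget are illustrative, not taken from the paper.

```python
# Minimal PGD sketch against a detector's total loss (illustrative only).
# Assumptions not in the record: PyTorch; `detector(image, targets)` is a
# hypothetical interface returning the scalar total loss for one image.
import torch

def pgd_attack(detector, image, targets, eps=8/255, alpha=2/255, steps=4):
    """Maximize the detector's total loss within an L-infinity ball of radius eps."""
    orig = image.detach()
    adv = orig.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = detector(adv, targets)                   # scalar total loss (assumed interface)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()             # gradient ascent on the loss
            adv = orig + (adv - orig).clamp(-eps, eps)  # project back into the eps-ball
            adv = adv.clamp(0.0, 1.0)                   # keep a valid pixel range
    return adv.detach()
```

The loop follows the standard PGD recipe (signed gradient step, then projection); attacking the total loss rather than a single classification loss is what ties the perturbation to both the classification and localization branches mentioned in the summary.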
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2019.11.051