ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector
Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
Published in: | arXiv.org, 2019-05 |
---|---|
Main Authors: | Shang-Tse Chen; Cory Cornelius; Jason Martin; Duen Horng Chau |
Format: | Article |
Language: | English |
Subjects: | Artificial neural networks; Classifiers; Computer vision; Digital imaging; Image classification; Image detection; Image enhancement; Image manipulation; Neural networks; Object recognition; Perturbation; Safety critical; Vision systems |
Online Access: | Get full text |

container_title | arXiv.org |
---|---|
creator | Shang-Tse Chen; Cory Cornelius; Jason Martin; Duen Horng Chau |
description | Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems. |
doi_str_mv | 10.48550/arxiv.1804.05810 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2019-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2072031331 |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3) |
subjects | Artificial neural networks; Classifiers; Computer vision; Digital imaging; Image classification; Image detection; Image enhancement; Image manipulation; Neural networks; Object recognition; Perturbation; Safety critical; Vision systems |
title | ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector |
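
The abstract describes adapting the Expectation over Transformation (EoT) technique from image classification to object detection: instead of misleading a single label, the perturbation must be optimized so that its *expected* loss over random physical-world distortions misleads every candidate bounding box. The sketch below illustrates that idea only; it is not the authors' implementation. `detector_logits` is a hypothetical stand-in for Faster R-CNN's per-proposal class scores, and the transformation model, step count, and learning rate are illustrative assumptions.

```python
# Minimal EoT sketch (PyTorch) of the idea described in the abstract.
# NOT the authors' code: `detector_logits` is a hypothetical callable
# returning per-proposal class scores, and all hyperparameters here
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def random_transform(x):
    # One sampled "physical" distortion: random brightness and sensor
    # noise as crude stand-ins for lighting and camera variation.
    brightness = 1.0 + 0.2 * (2.0 * torch.rand(1) - 1.0)  # +/-20% lighting
    noise = 0.02 * torch.randn_like(x)                    # sensor noise
    return (brightness * x + noise).clamp(0.0, 1.0)

def eot_attack(detector_logits, x_init, target_class,
               steps=200, lr=0.01, k=8):
    # Optimize a perturbation whose *expected* loss over k random
    # transformations drives every candidate box toward `target_class`.
    delta = torch.zeros_like(x_init, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for _ in range(k):  # Monte Carlo estimate of the expectation
            x_adv = random_transform((x_init + delta).clamp(0.0, 1.0))
            logits = detector_logits(x_adv)  # (num_boxes, num_classes)
            # Unlike a classifier attack, the loss sums over *all*
            # proposals at all scales, so every box is misled.
            target = torch.full((logits.shape[0],), target_class,
                                dtype=torch.long)
            loss = loss + F.cross_entropy(logits, target)
        loss.backward()
        opt.step()
    return (x_init + delta).detach().clamp(0.0, 1.0)
```

In the paper's setting, the expectation additionally covers viewing distances and angles, which the simple brightness-and-noise model above does not attempt to capture.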