An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models

Bibliographic Details
Main Authors: Deng, Yao; Zheng, Xi; Zhang, Tianyi; Chen, Chen; Lou, Guannan; Kim, Miryung
Format: Conference Proceeding
Language: English
Subjects: adversarial attack; Analytical models; Autonomous driving; Computational modeling; Computer architecture; Convolutional neural networks; defense; Information retrieval; Security; Wearable computers
Published in: 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom), March 2020, pp. 1-10
Publisher: IEEE
DOI: 10.1109/PerCom45495.2020.9127389
EISSN: 2474-249X
EISBN: 9781728146577; 1728146577
Source: IEEE Xplore All Conference Series

Description: Autonomous driving has attracted significant attention from both industry and academia. Convolutional neural networks (CNNs) are a key component of autonomous driving systems and are increasingly adopted in pervasive computing settings such as smartphones, wearable devices, and IoT networks. Prior work shows that CNN-based classification models are vulnerable to adversarial attacks. However, it remains unclear to what extent regression models such as driving models are vulnerable to adversarial attacks, how effective existing defense techniques are against them, and what the defense implications are for system and middleware builders. This paper presents an in-depth analysis of five adversarial attacks and four defense methods on three driving models. Experiments show that, like classification models, driving models remain highly vulnerable to adversarial attacks. This poses a significant security threat to autonomous driving and should be taken into account in practice. While the defense methods can effectively defend against individual attacks, none of them provides adequate protection against all five. We derive several implications for system and middleware builders: (1) when adding a defense component against adversarial attacks, multiple defense methods should be deployed in tandem to achieve good coverage of various attacks; (2) a black-box attack is much less effective than a white-box attack, implying that model details (e.g., model architecture, hyperparameters) should be kept confidential via model obfuscation; and (3) driving models with a complex architecture are preferable when computing resources permit, as they are more resilient to adversarial attacks than simple models.
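
The white-box finding above rests on the attacker having gradient access to the model. As an illustration only, not the paper's actual method, the sketch below adapts the classic FGSM attack from classification to a regression driving model: the attacker ascends the gradient of the mean-squared error on the predicted steering angle instead of a cross-entropy loss. SteeringModel, fgsm_regression, the input size, and the epsilon value are hypothetical names and parameters, assuming PyTorch.

    import torch
    import torch.nn as nn

    class SteeringModel(nn.Module):
        """Toy CNN mapping a camera frame to a single steering angle."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)  # regression head, not class logits

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def fgsm_regression(model, image, true_angle, epsilon=0.03):
        """One signed-gradient step that increases the MSE between the
        predicted and true steering angle (white-box: needs gradients)."""
        image = image.clone().requires_grad_(True)
        loss = nn.functional.mse_loss(model(image), true_angle)
        loss.backward()
        adv = image + epsilon * image.grad.sign()  # ascend the loss
        return adv.clamp(0.0, 1.0).detach()        # stay in valid pixel range

    model = SteeringModel().eval()
    frame = torch.rand(1, 3, 66, 200)   # placeholder camera frame
    true_angle = torch.zeros(1, 1)      # placeholder ground-truth angle
    adv_frame = fgsm_regression(model, frame, true_angle)
    print("clean:", model(frame).item(), "adversarial:", model(adv_frame).item())

A black-box attacker has no access to image.grad and must estimate gradients through repeated queries, which is one way to read implication (2): keeping the architecture and hyperparameters confidential raises the cost of mounting an effective attack.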