
Robust Trajectory Prediction against Adversarial Attacks

Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving (AD) systems. However, these methods are vulnerable to adversarial attacks, which can lead to serious consequences such as collisions. In this work, we identify two key ingredients for defending trajectory prediction models against adversarial attacks: (1) designing effective adversarial training methods, and (2) adding domain-specific data augmentation to mitigate the performance degradation on clean data. We demonstrate that, compared to a model trained only on clean data, our method improves performance by 46% on adversarial data at the cost of only a 3% degradation on clean data. Additionally, compared to existing robust methods, our method improves performance by 21% on adversarial examples and 9% on clean data. Our robust model is evaluated with a planner to study its downstream impacts. We demonstrate that our model can significantly reduce severe accident rates (e.g., collisions and off-road driving).
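
The record itself contains no code, but the first ingredient named in the abstract, adversarial training, is easy to illustrate. The sketch below is a generic, hypothetical PGD-style adversarial training loop for a trajectory predictor, not the authors' actual method: it perturbs the observed trajectory history within an L-infinity ball to maximize prediction error, then trains on both the clean and the perturbed batch. All names and hyperparameters (pgd_perturb_history, train_step, eps, and the MSE loss standing in for displacement error) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_perturb_history(model, history, future, eps=0.3, alpha=0.1, steps=5):
    # Hypothetical PGD attack: ascend the prediction loss w.r.t. the
    # observed trajectory history, keeping the perturbation in an L-inf ball.
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(steps):
        pred = model(history + delta)
        loss = F.mse_loss(pred, future)  # MSE as a stand-in for displacement error
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-ascent step
            delta.clamp_(-eps, eps)             # project back into the eps-ball
            delta.grad.zero_()
    return (history + delta).detach()

def train_step(model, optimizer, history, future):
    # Adversarial training: fit the model on both clean and perturbed inputs.
    adv_history = pgd_perturb_history(model, history, future)
    optimizer.zero_grad()
    loss = (F.mse_loss(model(history), future)
            + F.mse_loss(model(adv_history), future))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The second ingredient, domain-specific data augmentation, would plug into the same loop by transforming the history/future pairs before `train_step` is called; consult the paper itself for the actual formulation of both ingredients.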

Bibliographic Details
Published in: arXiv.org, 2022-07
Main Authors: Cao, Yulong; Xu, Danfei; Weng, Xinshuo; Mao, Zhuoqing; Anandkumar, Anima; Xiao, Chaowei; Pavone, Marco
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Subjects: Artificial neural networks; Collisions; Performance degradation; Performance enhancement; Prediction models; Robustness