Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification
The reliability of neural networks is essential for their use in safety-critical applications. Existing approaches generally aim at improving the robustness of neural networks to either real-world distribution shifts (e.g., common corruptions and perturbations, spatial transformations, and natural adversarial examples) or worst-case distribution shifts (e.g., optimized adversarial examples). In this work, we propose the Decision Region Quantification (DRQ) algorithm to improve the robustness of any differentiable pre-trained model against both real-world and worst-case distribution shifts in the data. DRQ analyzes the robustness of local decision regions in the vicinity of a given data point to make more reliable predictions. We theoretically motivate the DRQ algorithm by showing that it effectively smooths spurious local extrema in the decision surface. Furthermore, we propose an implementation using targeted and untargeted adversarial attacks. An extensive empirical evaluation shows that DRQ increases the robustness of adversarially and non-adversarially trained models against real-world and worst-case distribution shifts on several computer vision benchmark datasets.
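The neighbourhood-probing idea in the abstract can be illustrated with a toy sketch. This is a hypothetical illustration, not the authors' implementation: the paper searches the vicinity of a data point with targeted and untargeted adversarial attacks, whereas the sketch below simply majority-votes over random perturbations in an ε-ball; the names `drq_predict` and `toy_model` are invented for this example.

```python
import numpy as np

def drq_predict(model, x, eps=0.3, n_samples=200, seed=0):
    """Return the class whose decision region around x is most stable.

    Probes the eps-neighbourhood of x (L-infinity ball) with random
    samples and majority-votes over the model's predictions, smoothing
    over spurious local extrema in the decision surface.
    """
    rng = np.random.default_rng(seed)
    probes = x + rng.uniform(-eps, eps, size=(n_samples,) + x.shape)
    preds = np.array([int(np.argmax(model(p))) for p in probes])
    return int(np.bincount(preds).argmax())

# Toy two-class "model": linear logits plus a narrow spurious bump that
# flips the prediction exactly at x, but not in most of its vicinity.
def toy_model(p):
    clean = p[0] + p[1]                                    # smooth boundary: p0 + p1 = 0
    bump = 4.0 * np.exp(-200.0 * np.sum((p - 0.5) ** 2))   # spurious extremum at (0.5, 0.5)
    logit = clean - bump
    return np.array([-logit, logit])

x = np.array([0.5, 0.5])
print(int(np.argmax(toy_model(x))))  # 0: the spurious bump flips the point itself
print(drq_predict(toy_model, x))     # 1: the neighbourhood vote recovers the robust class
```

The point of the sketch is only the qualitative behaviour the abstract claims: a prediction corrupted by a spurious local extremum is overruled by the (more robust) decision region surrounding it.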
Published in: | arXiv.org 2022-05 |
---|---|
Main Authors: | Schwinn, Leo; Bungert, Leon; Nguyen, An; Raab, René; Falk Pulsmeyer; Precup, Doina; Eskofier, Björn; Zanca, Dario |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Computer vision; Data points; Decision analysis; Empirical analysis; Network reliability; Neural networks; Perturbation; Robustness; Safety critical |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Schwinn, Leo; Bungert, Leon; Nguyen, An; Raab, René; Falk Pulsmeyer; Precup, Doina; Eskofier, Björn; Zanca, Dario |
description | The reliability of neural networks is essential for their use in safety-critical applications. Existing approaches generally aim at improving the robustness of neural networks to either real-world distribution shifts (e.g., common corruptions and perturbations, spatial transformations, and natural adversarial examples) or worst-case distribution shifts (e.g., optimized adversarial examples). In this work, we propose the Decision Region Quantification (DRQ) algorithm to improve the robustness of any differentiable pre-trained model against both real-world and worst-case distribution shifts in the data. DRQ analyzes the robustness of local decision regions in the vicinity of a given data point to make more reliable predictions. We theoretically motivate the DRQ algorithm by showing that it effectively smooths spurious local extrema in the decision surface. Furthermore, we propose an implementation using targeted and untargeted adversarial attacks. An extensive empirical evaluation shows that DRQ increases the robustness of adversarially and non-adversarially trained models against real-world and worst-case distribution shifts on several computer vision benchmark datasets. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2667073589 |
source | Publicly Available Content Database |
subjects | Algorithms; Computer vision; Data points; Decision analysis; Empirical analysis; Network reliability; Neural networks; Perturbation; Robustness; Safety critical |
title | Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T18%3A57%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Improving%20Robustness%20against%20Real-World%20and%20Worst-Case%20Distribution%20Shifts%20through%20Decision%20Region%20Quantification&rft.jtitle=arXiv.org&rft.au=Schwinn,%20Leo&rft.date=2022-05-19&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2667073589%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_26670735893%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2667073589&rft_id=info:pmid/&rfr_iscdi=true |