GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations
Deep neural networks tend to make overconfident predictions and often require additional detectors for misclassifications, particularly for safety-critical applications. Existing detection methods usually only focus on adversarial attacks or out-of-distribution samples as reasons for false predictions. However, generalization errors occur due to diverse reasons often related to poorly learning relevant invariances. We therefore propose GIT, a holistic approach for the detection of generalization errors that combines the usage of gradient information and invariance transformations. The invariance transformations are designed to shift misclassified samples back into the generalization area of the neural network, while the gradient information measures the contradiction between the initial prediction and the corresponding inherent computations of the neural network using the transformed sample. Our experiments demonstrate the superior performance of GIT compared to the state-of-the-art on a variety of network architectures, problem setups and perturbation types.
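The abstract describes a two-step recipe: transform the input with an invariance the network should respect, then read out gradient information that quantifies how strongly the original prediction contradicts the network's computations on the transformed sample. The snippet below is a minimal sketch of such a detection score, not the authors' reference implementation: it assumes a PyTorch image classifier, and the specific transformation (small rotation plus horizontal flip), the use of the original prediction as a pseudo-label, and the plain gradient-norm readout are illustrative choices; the full GIT method will differ in detail.

```python
# Hedged sketch of a GIT-style generalization-error score.
# Assumptions (not from the paper): a PyTorch classifier, a rotation+flip
# invariance transformation, and a single gradient-norm readout.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def git_style_score(model: torch.nn.Module, x: torch.Tensor) -> float:
    """Return a scalar detection score for an input batch x of shape (N, C, H, W)."""
    model.eval()

    # 1) Original prediction, kept fixed as a pseudo-label.
    with torch.no_grad():
        pred = model(x).argmax(dim=1)

    # 2) Invariance transformation: here a small rotation plus a horizontal flip
    #    (an illustrative choice of transformation).
    x_t = TF.hflip(TF.rotate(x, angle=10.0))

    # 3) Gradient information: loss between the original prediction and the
    #    network's output on the transformed sample, back-propagated to the
    #    model parameters.
    model.zero_grad()
    logits_t = model(x_t)
    loss = F.cross_entropy(logits_t, pred)
    loss.backward()

    # 4) Aggregate gradient magnitudes into one score; a large value suggests
    #    the prediction contradicts the network's computations on the
    #    transformed sample, i.e. a likely generalization error.
    squared = [(p.grad.detach() ** 2).sum()
               for p in model.parameters() if p.grad is not None]
    return torch.sqrt(torch.stack(squared).sum()).item()
```

In practice such a scalar would be thresholded against scores collected on held-out correctly classified data, or replaced by layer-wise gradient statistics fed to a small trained detector; both variants are plausible readings of the abstract rather than details stated in this record.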
| Published in: | arXiv.org, 2023-07 |
|---|---|
| Main Authors: | Lust, Julia; Condurache, Alexandru P |
| Format: | Article |
| Language: | English |
| Subjects: | Artificial neural networks; Computer architecture; Errors; Invariance; Machine learning; Perturbation; Safety critical; Transformations |
| Online Access: | Get full text |
| Field | Value |
|---|---|
| cited_by | |
| cites | |
| container_end_page | |
| container_issue | |
| container_start_page | |
| container_title | arXiv.org |
| container_volume | |
| creator | Lust, Julia; Condurache, Alexandru P |
| description | Deep neural networks tend to make overconfident predictions and often require additional detectors for misclassifications, particularly for safety-critical applications. Existing detection methods usually only focus on adversarial attacks or out-of-distribution samples as reasons for false predictions. However, generalization errors occur due to diverse reasons often related to poorly learning relevant invariances. We therefore propose GIT, a holistic approach for the detection of generalization errors that combines the usage of gradient information and invariance transformations. The invariance transformations are designed to shift misclassified samples back into the generalization area of the neural network, while the gradient information measures the contradiction between the initial prediction and the corresponding inherent computations of the neural network using the transformed sample. Our experiments demonstrate the superior performance of GIT compared to the state-of-the-art on a variety of network architectures, problem setups and perturbation types. |
| format | article |
| fulltext | fulltext |
| identifier | EISSN: 2331-8422 |
| ispartof | arXiv.org, 2023-07 |
| issn | 2331-8422 |
| language | eng |
| recordid | cdi_proquest_journals_2834346932 |
| source | Publicly Available Content Database (Proquest) (PQ_SDU_P3) |
| subjects | Artificial neural networks; Computer architecture; Errors; Invariance; Machine learning; Perturbation; Safety critical; Transformations |
| title | GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations |