Evaluating the Faithfulness of Saliency-based Explanations for Deep Learning Models for Temporal Colour Constancy
The opacity of deep learning models constrains their debugging and improvement. Augmenting deep models with saliency-based strategies, such as attention, has been claimed to help get a better understanding of the decision-making process of black-box models. However, some recent works challenged saliency's faithfulness in the field of Natural Language Processing (NLP), questioning attention weights' adherence to the true decision-making process of the model. We add to this discussion by evaluating the faithfulness of in-model saliency applied to a video processing task for the first time, namely, temporal colour constancy. We perform the evaluation by adapting to our target task two tests for faithfulness from recent NLP literature, whose methodology we refine as part of our contributions. We show that attention fails to achieve faithfulness, while confidence, a particular type of in-model visual saliency, succeeds.
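As an illustration only: the abstract says two faithfulness tests were adapted from the NLP literature but does not name them. The minimal sketch below is not the paper's method; it shows one common style of test from that literature, a permutation check that shuffles attention weights and measures how much the prediction moves. The toy model, feature shapes, and function names are all assumptions made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_model(frames, weights):
    # Toy stand-in for an attention-based temporal colour constancy network:
    # attention-pool per-frame features, then apply a fixed linear readout.
    context = (weights[:, None] * frames).sum(axis=0)
    readout = np.array([0.3, -0.2, 0.5, 0.1])  # arbitrary fixed head (toy)
    return float(context @ readout)

def permutation_faithfulness(frames, weights, n_perm=100):
    # If the attention weights are faithful, shuffling them should change the
    # prediction noticeably; near-zero changes suggest the weights do not
    # track the model's actual decision process.
    base = attention_model(frames, weights)
    deltas = [abs(attention_model(frames, rng.permutation(weights)) - base)
              for _ in range(n_perm)]
    return float(np.median(deltas))

# Toy data: 8 frames with 4 features each, and a normalised attention distribution.
frames = rng.normal(size=(8, 4))
weights = np.exp(rng.normal(size=8))
weights /= weights.sum()

print("median |change in prediction| under permuted attention:",
      permutation_faithfulness(frames, weights))
```

In the NLP versions of this test the same idea is applied to token-level attention; the paper adapts such tests to video frames, but the exact adaptation is described only in the full text.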
| Published in: | arXiv.org, 2022-11 |
| --- | --- |
| Main Authors: | Rizzo, Matteo; Conati, Cristina; Jang, Daesik; Hu, Hui |
| Format: | Article |
| Language: | English |
| Subjects: | Color; Decision making; Deep learning; Image processing; Mathematical models; Natural language processing; Salience; Video |
| Online Access: | Get full text |
| Field | Value |
| --- | --- |
| cited_by | |
| cites | |
| container_end_page | |
| container_issue | |
| container_start_page | |
| container_title | arXiv.org |
| container_volume | |
| creator | Rizzo, Matteo; Conati, Cristina; Jang, Daesik; Hu, Hui |
| description | The opacity of deep learning models constrains their debugging and improvement. Augmenting deep models with saliency-based strategies, such as attention, has been claimed to help get a better understanding of the decision-making process of black-box models. However, some recent works challenged saliency's faithfulness in the field of Natural Language Processing (NLP), questioning attention weights' adherence to the true decision-making process of the model. We add to this discussion by evaluating the faithfulness of in-model saliency applied to a video processing task for the first time, namely, temporal colour constancy. We perform the evaluation by adapting to our target task two tests for faithfulness from recent NLP literature, whose methodology we refine as part of our contributions. We show that attention fails to achieve faithfulness, while confidence, a particular type of in-model visual saliency, succeeds. |
| format | article |
| fulltext | fulltext |
| identifier | EISSN: 2331-8422 |
| ispartof | arXiv.org, 2022-11 |
| issn | 2331-8422 |
| language | eng |
| recordid | cdi_proquest_journals_2736932285 |
| source | Publicly Available Content (ProQuest) |
| subjects | Color; Decision making; Deep learning; Image processing; Mathematical models; Natural language processing; Salience; Video |
| title | Evaluating the Faithfulness of Saliency-based Explanations for Deep Learning Models for Temporal Colour Constancy |
| url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T17%3A23%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Evaluating%20the%20Faithfulness%20of%20Saliency-based%20Explanations%20for%20Deep%20Learning%20Models%20for%20Temporal%20Colour%20Constancy&rft.jtitle=arXiv.org&rft.au=Rizzo,%20Matteo&rft.date=2022-11-15&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2736932285%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_27369322853%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2736932285&rft_id=info:pmid/&rfr_iscdi=true |