Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations
Main Authors: | Dreyer, Maximilian; Achtibat, Reduan; Samek, Wojciech; Lapuschkin, Sebastian |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | AI safety; concept-based XAI; Conferences; Data integrity; Decision making; Explainable AI; outlier detection; Predictive models; Prototypes; Training data |
Online Access: | Request full text |
cited_by | |
---|---|
cites | |
container_end_page | 3501 |
container_issue | |
container_start_page | 3491 |
container_title | |
container_volume | |
creator | Dreyer, Maximilian; Achtibat, Reduan; Samek, Wojciech; Lapuschkin, Sebastian |
description | Ensuring both transparency and safety is critical when deploying Deep Neural Networks (DNNs) in high-risk applications, such as medicine. The field of explainable AI (XAI) has proposed various methods to comprehend the decision-making processes of opaque DNNs. However, few XAI methods are suitable for ensuring safety in practice, as they rely heavily on repeated, labor-intensive, and possibly biased human assessment. In this work, we present a novel post-hoc concept-based XAI framework that conveys not only instance-wise (local) but also class-wise (global) decision-making strategies via prototypes. What sets our approach apart is the combination of local and global strategies, enabling a clearer understanding of the (dis-)similarities in model decisions compared to the expected (prototypical) concept use, ultimately reducing the dependence on long-term human assessment. Quantifying the deviation from prototypical behavior allows not only associating predictions with specific model sub-strategies but also detecting outlier behavior. As such, our approach constitutes an intuitive and explainable tool for model validation. We demonstrate the effectiveness of our approach in identifying out-of-distribution samples, spurious model behavior, and data quality issues across three datasets (ImageNet, CUB-200, and CIFAR-10) utilizing VGG, ResNet, and EfficientNet architectures. Code is available at https://github.com/maxdreyer/pcx. (An illustrative sketch of the prototype-deviation idea follows the record table below.) |
doi_str_mv | 10.1109/CVPRW63382.2024.00353 |
format | conference_proceeding |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2160-7516 |
ispartof | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2024, p.3491-3501 |
issn | 2160-7516 |
language | eng |
recordid | cdi_ieee_primary_10678010 |
source | IEEE Xplore All Conference Series |
subjects | AI safety; concept-based XAI; Conferences; Data integrity; Decision making; Explainable AI; outlier detection; Predictive models; Prototypes; Training data |
title | Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations |
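The description above mentions quantifying how far a prediction's concept use deviates from class prototypes in order to flag outliers. The sketch below illustrates that general idea only; it is not the authors' implementation (their code is at https://github.com/maxdreyer/pcx), and the concept-relevance vectors, the Gaussian-mixture prototypes, and the percentile threshold are all illustrative assumptions.

```python
# Illustrative sketch only: NOT the authors' implementation
# (see https://github.com/maxdreyer/pcx). It assumes per-sample
# "concept relevance" vectors have already been extracted from a trained
# model for one class, and shows one plausible way to
# (a) summarize the class by prototypes and
# (b) score how strongly a new prediction deviates from prototypical concept use.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical data: 500 training samples of one class, each described by
# the relevance of 32 concepts to the model's prediction.
concept_relevance = rng.normal(size=(500, 32))

# Prototypes: fit a Gaussian mixture over the class's concept-relevance
# vectors; each component mean acts as one prototypical sub-strategy.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(concept_relevance)
prototypes = gmm.means_  # shape (4, 32): one prototype vector per sub-strategy

# Deviation score: log-likelihood under the mixture. Low values indicate
# non-prototypical behavior (candidate OOD samples, spurious concept use,
# or data-quality issues worth inspecting).
test_sample = rng.normal(size=(1, 32))
log_likelihood = gmm.score_samples(test_sample)[0]

# Flag outliers against a threshold taken from the training distribution
# (here: the 1st percentile of training log-likelihoods, an arbitrary choice).
threshold = np.percentile(gmm.score_samples(concept_relevance), 1)
is_outlier = log_likelihood < threshold

# Inspect which concepts differ most from the nearest prototype; large
# absolute differences point to the concepts driving the unusual prediction.
nearest_prototype = prototypes[gmm.predict(test_sample)[0]]
concept_deviation = test_sample[0] - nearest_prototype
top_deviating_concepts = np.argsort(-np.abs(concept_deviation))[:5]

print(f"outlier: {is_outlier}, most deviating concepts: {top_deviating_concepts}")
```

Fitting one mixture per class mirrors the abstract's framing of class-wise (global) strategies as prototypes, while the per-sample likelihood and per-concept differences correspond to the instance-wise (local) comparison against expected concept use.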