
This Looks More Like That: Enhancing Self-Explaining Models by Prototypical Relevance Propagation

Bibliographic Details
Published in: Pattern Recognition, 2023-04, Vol. 136, Article 109172
Main Authors: Gautam, Srishti; Höhne, Marina M.-C.; Hansen, Stine; Jenssen, Robert; Kampffmeyer, Michael
Format: Article
Language: English
Description
Summary:
•Detailed analysis of the shortcomings of the current state-of-the-art self-explaining model ProtoPNet.
•A novel method improving the precision of prototype explanations: Prototypical Relevance Propagation.
•Extensive qualitative and quantitative evaluation of the explanations regarding artifact detection.
•A multi-view clustering approach that uses PRP to detect and remove artifactual data.

Current machine learning models have shown high efficiency in solving a wide variety of real-world problems. However, their black-box character poses a major challenge for the comprehensibility and traceability of the underlying decision-making strategies. As a remedy, numerous post-hoc and self-explanation methods have been developed to interpret the models’ behavior. In addition, these methods enable the identification of artifacts that are inherent in the training data and can be erroneously learned by the model as class-relevant features. In this work, we provide a detailed case study of a representative of state-of-the-art self-explaining networks, ProtoPNet, in the presence of a spectrum of artifacts. Accordingly, we identify the main drawbacks of ProtoPNet, especially its coarse and spatially imprecise explanations. We address these limitations by introducing Prototypical Relevance Propagation (PRP), a novel method for generating more precise model-aware explanations. Furthermore, in order to obtain a clean, artifact-free dataset, we propose to use multi-view clustering strategies to segregate artifact images based on their PRP explanations, thereby suppressing potential artifact learning in the models.
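As a rough illustration of the relevance-propagation idea behind PRP, the sketch below redistributes the similarity score of a single prototype back onto pooled backbone features using a generic LRP epsilon rule. The shapes, the dense stand-in for the prototype layer, and the epsilon rule itself are assumptions chosen for illustration only; they are not the implementation described in the article.

import numpy as np

def lrp_epsilon_dense(a, W, R_out, eps=1e-6):
    # Generic LRP-epsilon rule for a dense layer z = a @ W:
    # R_in_i = a_i * sum_j W_ij * R_out_j / (z_j + eps * sign(z_j))
    z = a @ W
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilize against tiny denominators
    s = R_out / z                               # per-output relevance ratio
    return a * (W @ s)                          # redistribute relevance onto the inputs

rng = np.random.default_rng(0)
a = rng.random(64)                        # pooled backbone feature vector (hypothetical size)
W = 0.1 * rng.standard_normal((64, 10))   # dense stand-in for a 10-prototype layer
scores = a @ W                            # similarity-like prototype scores

R_out = np.zeros_like(scores)
R_out[3] = scores[3]                      # start relevance at one prototype of interest

R_features = lrp_epsilon_dense(a, W, R_out)
print(R_features.shape, R_features.sum())  # total relevance approximately conserved

In an actual prototype network, this redistribution step would be applied layer by layer through the backbone down to the input pixels, yielding a prototype-specific relevance map rather than the single-layer vector shown here.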
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2022.109172