
Explaining clinical decision support systems in medical imaging using cycle-consistent activation maximization


Bibliographic Details
Published in: Neurocomputing (Amsterdam) 2021-10, Vol. 458, p. 141-156
Main Authors: Katzmann, Alexander, Taubmann, Oliver, Ahmad, Stephen, Mühlberg, Alexander, Sühling, Michael, Groß, Horst-Michael
Format: Article
Language: English
Description
Summary:
•A novel approach combining CycleGANs and Activation Maximization is shown to be applicable to the task of classifier decision explanation.
•The method is demonstrated on medical imaging data, using the well-known, publicly available LIDC-IDRI dataset and the MedMNIST breast cancer ultrasound image collection.
•In a thorough user study, the generated explanations clearly outperformed multiple popular approaches for classifier decision explanation.

Clinical decision support using deep neural networks has become a topic of steadily growing interest. While recent work has repeatedly demonstrated that deep learning offers major advantages for medical image classification over traditional methods, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered intransparent and difficult to comprehend. In recent years, this has been addressed by a variety of approaches that have successfully contributed to providing deeper insight. Most notably, additive feature attribution methods are able to propagate decisions back into the input space by creating a saliency map which allows the practitioner to “see what the network sees.” However, the quality of the generated maps can become poor and the images noisy if only limited data is available—a typical scenario in clinical contexts. We propose a novel decision explanation scheme based on CycleGAN activation maximization which generates high-quality visualizations of classifier decisions even on smaller data sets. We conducted a user study in which we evaluated our method on the LIDC dataset for lung lesion malignancy classification, the BreastMNIST dataset for ultrasound image breast cancer detection, as well as two subsets of the CIFAR-10 dataset for RGB image object recognition. Within this user study, our method clearly outperformed existing approaches on the medical imaging datasets and ranked second in the natural image setting. With our approach we make a significant contribution towards a better understanding of clinical decision support systems based on deep neural networks and thus aim to foster overall clinical acceptance.
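For context on the core technique named in the abstract: plain activation maximization synthesizes an input that maximally excites a target class score by gradient ascent on the input itself. The following is a minimal NumPy sketch using a hypothetical linear "classifier" (the weight vector `w` and all parameter values are illustrative assumptions, not the paper's model, which constrains the optimization with a CycleGAN to keep outputs on the image manifold):

```python
import numpy as np

# Hypothetical linear "classifier": score(x) = w . x
rng = np.random.default_rng(0)
w = rng.normal(size=16)

def score(x):
    return float(w @ x)

def activation_maximization(w, steps=100, lr=0.1, l2=0.01):
    """Gradient ascent on the input to maximize the class score.

    Objective: w.x - (l2/2) * ||x||^2; the L2 penalty keeps the
    synthesized input bounded. For this linear score the gradient
    is available in closed form.
    """
    x = np.zeros_like(w)
    for _ in range(steps):
        grad = w - l2 * x  # d/dx of the penalized objective
        x += lr * grad
    return x

x_opt = activation_maximization(w)
```

Because the score is linear, the synthesized input ends up proportional to the weight vector, i.e. the "explanation" simply visualizes what the classifier weights; for deep networks the same ascent on unconstrained pixels tends to produce adversarial-looking noise, which is the failure mode the paper's CycleGAN-based regularization addresses.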
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2021.05.081