A framework for falsifiable explanations of machine learning models with an application in computational pathology

Bibliographic Details
Published in: Medical Image Analysis, 2022-11, Vol. 82, Article 102594
Main Authors: Schuhmacher, David, Schörner, Stephanie, Küpper, Claus, Großerueschkamp, Frederik, Sternemann, Carlo, Lugnier, Celine, Kraeft, Anna-Lena, Jütte, Hendrik, Tannapfel, Andrea, Reinacher-Schick, Anke, Gerwert, Klaus, Mosig, Axel
Format: Article
Language: English
Summary: In recent years, deep learning has been the key driver of breakthrough developments in computational pathology and other image-based approaches that support medical diagnosis and treatment. The underlying neural networks, as inherent black boxes, lack transparency and are often accompanied by approaches to explain their output. However, formally defining explainability has been a notorious unsolved riddle. Here, we introduce a hypothesis-based framework for falsifiable explanations of machine learning models. A falsifiable explanation is a hypothesis that connects an intermediate space induced by the model with the sample from which the data originate. We instantiate this framework in a computational pathology setting using hyperspectral infrared microscopy. The intermediate space is an activation map, which is trained with an inductive bias to localize tumor. An explanation is constituted by hypothesizing that activation corresponds to tumor and associated structures, which we validate by histological staining as an independent secondary experiment.

Highlights:
• We define an explanation of a machine learning model as a falsifiable hypothesis.
• The explaining hypothesis involves a variable inferred by the machine learning model.
• The hypothesis refers to the sample from which the input data originate.
• Our proposed CompSegNet uses the hypothesis as a tool to model inductive bias.
• Our framework connects inductive machine learning with deductive reasoning.
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2022.102594
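
Note: The framework summarized above can be illustrated with a minimal, hypothetical sketch (PyTorch-style). This is not the authors' CompSegNet, and the class, variable names, and architecture below are illustrative assumptions only: a network supervised with nothing but sample-level labels internally produces a pixel-level activation map, which plays the role of the intermediate space that the explaining hypothesis ("high activation corresponds to tumor tissue") refers to, and which can then be tested against an independent secondary experiment such as a histological stain of the same sample.

# Hypothetical sketch, not the published CompSegNet architecture.
import torch
import torch.nn as nn

class ActivationMapNet(nn.Module):
    """Toy model: a small conv backbone yields a single-channel activation
    map; global average pooling turns it into a sample-level tumor score."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one-channel activation map
        )

    def forward(self, x):
        activation_map = torch.sigmoid(self.backbone(x))  # (B, 1, H, W)
        sample_score = activation_map.mean(dim=(2, 3))    # (B, 1) pooled score
        return sample_score, activation_map

# Training uses only sample-level labels (e.g. "contains tumor": 1 or 0);
# the activation map itself is never supervised directly.
model = ActivationMapNet(in_channels=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCELoss()

images = torch.randn(4, 3, 128, 128)            # stand-in for image tiles
labels = torch.tensor([[1.], [0.], [1.], [0.]])  # sample-level labels

optimizer.zero_grad()
score, act_map = model(images)
loss = criterion(score, labels)
loss.backward()
optimizer.step()

# act_map is the intermediate variable the explaining hypothesis refers to;
# the hypothesis is falsified or corroborated by comparing it with an
# independent measurement of the sample (e.g. a histological stain), not
# by inspecting the network itself.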