
Scientific Discovery by Generating Counterfactuals using Image Translation

Bibliographic Details
Published in: arXiv.org 2020-07
Main Authors: Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Webster, Dale R; Peng, Lily; Corrado, Greg; Ruamviboonsuk, Paisan; Bavishi, Pinal; Sayres, Rory; Huang, Abigail; Balasubramanian, Siva; Brenner, Michael; Nelson, Philip; Varadarajan, Avinash V
Format: Article
Language: English
Description
Summary: Model explanation techniques play a critical role in understanding the source of a model's performance and in making its decisions transparent. Here we investigate whether explanation techniques can also serve as a mechanism for scientific discovery. We make three contributions: first, we propose a framework that converts the outputs of explanation techniques into a mechanism of discovery. Second, we show how generative models combined with black-box predictors can be used to generate hypotheses (without human priors) that can then be critically examined. Third, with these techniques we study classification models that predict Diabetic Macular Edema (DME) from retinal images, where recent work showed that a CNN trained on these images is likely learning novel features. We demonstrate that the proposed framework can explain the underlying scientific mechanism, thus bridging the gap between the model's performance and human understanding.
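The counterfactual idea in the summary (translate an image toward the opposite class until a black-box predictor's decision flips, then inspect what changed) can be illustrated with a toy sketch. The linear `predictor` and interpolation-style `translate` step below are stand-ins for the paper's CNN and image-translation generator, not their actual implementation; this only demonstrates the control loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's actual models):
# - predictor: a fixed linear "black-box" scorer on flattened images
# - translate: one "image translation" step that nudges the input toward
#   a target-class exemplar, mimicking a generative translation model
w = rng.normal(size=64)

def predictor(x):
    """Black-box score; > 0 means the target class is predicted."""
    return float(x @ w)

target_exemplar = 5.0 * w / np.linalg.norm(w)  # scores strongly positive

def translate(x, step=0.1):
    """One translation step toward the target class."""
    return x + step * (target_exemplar - x)

# Start from an image scored negative, then translate until the
# black-box decision flips; the accumulated pixel changes are the
# counterfactual "hypothesis" a human can critically examine.
x0 = -2.0 * w / np.linalg.norm(w) + 0.01 * rng.normal(size=64)
x = x0.copy()
steps = 0
while predictor(x) <= 0 and steps < 100:
    x = translate(x)
    steps += 1

difference_map = x - x0  # regions changed in order to flip the decision
```

In the paper's setting the difference map would be rendered over the retinal image, so that the regions the generator had to alter point at candidate image features driving the DME prediction.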
ISSN: 2331-8422