Iterative and Adaptive Sampling with Spatial Attention for Black-Box Model Explanations

Bibliographic Details
Main Authors: Vasu, Bhavan, Long, Chengjiang
Format: Conference Proceeding
Language:English
Subjects:
Description
Summary: Deep neural networks have achieved great success in many real-world applications, yet it remains unclear and difficult to explain their decision-making process to an end user. In this paper, we address the explainable AI problem for deep neural networks with our proposed framework, named IASSA, which generates an importance map indicating how salient each pixel is for the model's prediction with an iterative and adaptive sampling module. We employ an affinity matrix calculated on multi-level deep learning features to explore long-range pixel-to-pixel correlation, which can shift the saliency values guided by our long-range and parameter-free spatial attention module. Extensive experiments on the MS-COCO dataset show that the proposed approach matches or exceeds the performance of state-of-the-art black-box explanation methods. Our source code is available at https://github.com/vbhavank/IASSA-Saliency.
ISSN:2642-9381
DOI:10.1109/WACV45572.2020.9093576
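
The summary describes shifting saliency values through a parameter-free spatial attention built from a feature affinity matrix. The following is a minimal NumPy sketch of that general idea, not the authors' implementation (see the linked repository for that): cosine-similarity affinities between pixel features are normalized row-wise with a softmax and used to propagate saliency along correlated pixels. The function name and the choice of cosine similarity are illustrative assumptions.

```python
import numpy as np

def spatial_attention_shift(features, saliency):
    """Shift per-pixel saliency via a parameter-free affinity matrix.

    features: (N, C) array, one deep-feature vector per pixel (N = H*W)
    saliency: (N,) array of initial saliency values
    returns:  (N,) saliency propagated along pixel-to-pixel correlations
    """
    # Cosine-similarity affinity between every pair of pixel features
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    affinity = norm @ norm.T                         # (N, N)
    # Row-wise softmax turns affinities into attention weights (no learned parameters)
    w = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Each pixel's new saliency is an attention-weighted average over all pixels
    return w @ saliency
```

Because each output value is a convex combination of the input saliencies, the shifted map stays within the original value range while smoothing saliency across strongly correlated (e.g. same-object) pixels.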