From Heatmaps to Structural Explanations of Image Classifiers

by Li Fuxin, et al.

This paper summarizes our endeavors over the past few years in explaining image classifiers, with the aim of including negative results and insights we have gained. The paper begins by describing the explainable neural network (XNN), which attempts to extract and visualize several high-level concepts purely from the deep network, without relying on human linguistic concepts. This helps users understand network classifications that are less intuitive and substantially improves user performance on a difficult fine-grained classification task of discriminating among different species of seagulls. Realizing that an important missing piece is a reliable heatmap visualization tool, we developed I-GOS and iGOS++, which utilize integrated gradients to avoid local optima in heatmap generation and improve performance across all resolutions. During the development of those visualizations, we realized that for a significant number of images, the classifier has multiple different paths to reach a confident prediction. This has led to our recent development of structured attention graphs (SAGs), an approach that utilizes beam search to locate multiple coarse heatmaps for a single image and compactly visualizes a set of heatmaps by capturing how different combinations of image regions impact the confidence of a classifier. Through this research we have gained many insights into building deep network explanations, the existence and frequency of multiple explanations, and various tricks of the trade that make explanations work. In this paper, we attempt to share those insights and opinions with the reader, with the hope that some of them will be informative for future researchers on explainable deep learning.
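The integrated-gradients computation referred to above can be sketched generically. This is a minimal sketch of plain integrated-gradients attribution, not the authors' full I-GOS/iGOS++ mask optimization; the function name `integrated_gradients` and the parameters `f_grad`, `baseline`, and `steps` are illustrative assumptions:

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Approximate integrated gradients of a scalar class score.

    f_grad(z) must return the gradient of the class score at input z.
    Attribution_i ~= (x_i - baseline_i) * mean over alpha in [0, 1]
    of grad_i(baseline + alpha * (x - baseline)).
    """
    diff = x - baseline
    total = np.zeros_like(x, dtype=float)
    # Riemann-sum approximation of the path integral of gradients
    for alpha in np.linspace(0.0, 1.0, steps):
        total += f_grad(baseline + alpha * diff)
    return diff * total / steps
```

A useful sanity check is the completeness property: for a score f and a zero baseline, the attributions should approximately sum to f(x) - f(baseline). Methods like I-GOS use such integrated gradients in place of plain gradients as a descent direction, which helps the heatmap optimization escape poor local optima.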



