Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation

10/01/2020
by Sam Sattarzadeh, et al.

As an emerging field in machine learning, Explainable AI (XAI) has shown remarkable results in interpreting the decisions made by Convolutional Neural Networks (CNNs). Among the methods for producing visual explanations of CNNs, those based on class activation mapping and randomized input sampling have gained great popularity. However, attribution methods built on these techniques produce low-resolution and blurry explanation maps, which limits their explanatory power. To circumvent this issue, we seek visualizations derived from multiple layers of the model. In this work, we collect visualization maps from several layers using an attribution-based input sampling technique and aggregate them into a fine-grained and complete explanation. We also propose a layer selection strategy applicable to the whole family of CNN-based models, under which our framework visualizes the last layer of each convolutional block. Moreover, we perform an empirical analysis of how much the derived lower-level information enhances the represented attributions. Comprehensive experiments on shallow and deep models trained on natural and industrial datasets, using both ground-truth-based and model-truth-based evaluation metrics, validate the proposed algorithm: it matches or outperforms state-of-the-art methods in explanation ability and visual quality, and remains stable regardless of the size of the objects or instances being explained.
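
To make the pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the idea described above, not the paper's reference implementation. It assumes torchvision's VGG16 as the backbone; the per-block layer indices, the mean-activation mask selection (a simplified stand-in for the paper's attribution-based sampling), and all names such as explain and max_masks are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

model = vgg16(weights="IMAGENET1K_V1").eval()

# Layer selection: hook the last convolutional layer of each block.
# These indices are specific to torchvision's VGG16 (an assumption);
# other CNN backbones need their own per-block layer indices.
block_outputs = []
for idx in [3, 8, 15, 22, 29]:
    model.features[idx].register_forward_hook(
        lambda _m, _i, out: block_outputs.append(out.detach()))

def explain(image, target_class, max_masks=32):
    """Fuse attribution maps from all hooked blocks into one saliency map."""
    block_outputs.clear()
    with torch.no_grad():
        model(image)                        # one pass fills the hooks
    feats = [f[0] for f in block_outputs]   # snapshot: (C, h, w) per block

    h, w = image.shape[-2:]
    saliency = torch.zeros(h, w)
    for fmaps in feats:
        # Attribution-based input sampling, simplified: keep the feature
        # maps with the highest mean activation as soft input masks.
        scores = fmaps.mean(dim=(1, 2))
        top = scores.topk(min(max_masks, len(scores))).indices
        masks = F.interpolate(fmaps[top].unsqueeze(1), size=(h, w),
                              mode="bilinear", align_corners=False)
        masks = (masks - masks.amin()) / (masks.amax() - masks.amin() + 1e-8)

        # Weight each mask by the model's confidence on the masked input.
        layer_map = torch.zeros(h, w)
        with torch.no_grad():
            for m in masks:                 # m: (1, H, W)
                probs = torch.softmax(model(image * m), dim=1)
                layer_map += probs[0, target_class] * m[0]
        saliency += layer_map / (layer_map.max() + 1e-8)  # block-wise fusion

    return saliency / saliency.max()
```

Usage would look like saliency = explain(preprocessed_image, class_idx) on a normalized (1, 3, H, W) input. Each block's map is normalized before fusion so that coarse high-level maps and fine low-level maps contribute on a comparable scale.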


