A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations

05/18/2018
by Weili Nie, et al.

Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs); however, a theory justifying their behaviors is missing: guided backpropagation (GBP) and the deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than the saliency map. Motivated by this, we develop a theoretical explanation revealing that GBP and DeconvNet are essentially doing (partial) image recovery and are thus unrelated to the network's decisions. Specifically, our analysis shows that the backward ReLU introduced by GBP and DeconvNet, together with the local connections in CNNs, are the two main causes of the compelling visualizations. Extensive experiments are provided that support the theoretical analysis.
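As context for the abstract's point about the backward ReLU, below is a minimal NumPy sketch of the three backward ReLU rules that distinguish the saliency map, DeconvNet, and GBP. The function names and the toy inputs are our own illustration, not code from the paper.

```python
# Illustrative sketch (not from the paper): the backward ReLU rules that
# distinguish the saliency map, DeconvNet, and guided backpropagation (GBP).
import numpy as np

def relu_backward_saliency(grad_out, forward_input):
    # Saliency map (vanilla gradient): pass the gradient only where the
    # forward input to the ReLU was positive.
    return grad_out * (forward_input > 0)

def relu_backward_deconvnet(grad_out, forward_input):
    # DeconvNet: pass the signal only where the *backward* signal is positive,
    # ignoring the forward activation pattern ("backward ReLU").
    return grad_out * (grad_out > 0)

def relu_backward_guided(grad_out, forward_input):
    # GBP: combine both masks, i.e. the saliency-map rule plus the extra
    # backward ReLU that the paper's analysis focuses on.
    return grad_out * (forward_input > 0) * (grad_out > 0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5)   # forward input to a ReLU unit
    g = rng.standard_normal(5)   # incoming backward signal
    print("saliency :", relu_backward_saliency(g, x))
    print("deconvnet:", relu_backward_deconvnet(g, x))
    print("guided   :", relu_backward_guided(g, x))
```

The extra positivity constraint on the backward signal is what, per the abstract, decouples GBP and DeconvNet visualizations from the class decision while making them look cleaner.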

