Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation

12/22/2017
by   Heyi Li, et al.

Despite the tremendous achievements of deep convolutional neural networks (CNNs) in most computer vision tasks, understanding how they actually work remains a significant challenge. In this paper, we propose a novel two-step visualization method that aims to shed light on how deep CNNs recognize images and the objects therein. We start with a layer-wise relevance propagation (LRP) step, which estimates a pixel-wise relevance map over the input image. We then construct a context-aware saliency map from the LRP-generated map, which predicts regions close to the foci of attention. We show that our algorithm clearly and concisely identifies the key pixels that contribute to the underlying neural network's comprehension of images. Experimental results on the ILSVRC2012 validation dataset with two well-established deep CNNs demonstrate that combining LRP with visual saliency estimation can give great insight into how a CNN model perceives and understands a presented scene, in relation to what it has learned in the prior training phase.
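To make the first step concrete: LRP redistributes the network's output score backward through the layers so that each input pixel receives a relevance value. The paper's full pipeline is more involved, but the sketch below shows the standard LRP ε-rule for a single dense layer, assuming NumPy and the common formulation from the LRP literature; the function name, shapes, and ε value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance onto the inputs of one dense layer
    using the LRP epsilon-rule (a sketch, not the paper's exact code).

    a     : (n_in,)        activations entering the layer
    W     : (n_in, n_out)  weight matrix
    b     : (n_out,)       bias
    R_out : (n_out,)       relevance assigned to the layer's outputs
    """
    z = a @ W + b                                  # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # stabilize near-zero denominators
    s = R_out / z                                  # per-output relevance "rates"
    c = W @ s                                      # push relevance back through the weights
    return a * c                                   # relevance of each input unit
                                                   # (bias relevance absorbed, a common simplification)

# Toy usage: relevance starts at the output as a mask on the predicted
# class and is propagated back toward the input, layer by layer.
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.standard_normal((4, 3))
b = np.zeros(3)
logits = a @ W + b
R_out = np.where(logits == logits.max(), logits, 0.0)
R_in = lrp_epsilon_dense(a, W, b, R_out)           # pixel-side relevance for this layer
```

Applying such a rule from the output logits down to the input produces the pixel-wise relevance map; the method's second step then builds a context-aware saliency map on top of that map.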
