Convolutional Neural Network Interpretability with General Pattern Theory

02/05/2021
by Erico Tjoa, et al.

Ongoing efforts to understand deep neural networks (DNNs) have provided many insights, but DNNs remain incompletely understood. Improving their interpretability has practical benefits, such as more accountable usage and better algorithm maintenance and improvement. The complexity of dataset structure may contribute to the difficulty of the interpretability problem that arises from the DNN's black-box mechanism. We therefore propose to use the pattern theory formulated by Ulf Grenander, in which data can be described as configurations of fundamental objects, allowing us to investigate convolutional neural network (CNN) interpretability in a component-wise manner. Specifically, a U-Net-like structure is formed by attaching expansion blocks (EB) to a ResNet, allowing it to perform semantic segmentation-like tasks at its EB output channels, which are designed to be compatible with pattern theory's configurations. Through these modules, some heatmap-based explainable artificial intelligence (XAI) methods are shown to extract explanations w.r.t. the individual generators that make up a single data sample, potentially reducing the impact of the dataset's complexity on the interpretability problem. An MNIST-equivalent dataset containing pattern theory's elements is designed to facilitate a smoother entry into this framework, along which the theory's generative aspect is naturally presented.
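To make the described architecture concrete, below is a minimal PyTorch sketch (not the paper's released code) of a ResNet backbone with expansion blocks attached, emitting segmentation-like heatmaps with one output channel per generator. The `ExpansionBlock` and `ResNetWithEB` names, the channel widths, the number of upsampling stages, and the choice of resnet18 are illustrative assumptions; the paper's U-Net-like design may also include encoder-decoder skip connections, which are omitted here for brevity.

```python
# Minimal sketch, assuming a torchvision ResNet backbone: stack expansion
# blocks (EB) on the encoder so the network outputs one heatmap channel per
# pattern-theory generator. All module names and sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ExpansionBlock(nn.Module):
    """One upsampling stage: 2x spatial expansion followed by refinement."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.refine(self.up(x))


class ResNetWithEB(nn.Module):
    """ResNet encoder plus stacked expansion blocks (U-Net-like, but without
    skip connections in this sketch), ending in per-generator channels."""

    def __init__(self, n_generators=10):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep everything up to (but excluding) global pooling and the fc head.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # resnet18's last conv stage gives 512 channels at 1/32 resolution;
        # five 2x expansions recover the input resolution.
        chs = [512, 256, 128, 64, 32, 16]
        self.decoder = nn.Sequential(
            *[ExpansionBlock(chs[i], chs[i + 1]) for i in range(5)]
        )
        # One segmentation-like output channel per generator.
        self.head = nn.Conv2d(chs[-1], n_generators, kernel_size=1)

    def forward(self, x):
        return self.head(self.decoder(self.encoder(x)))


# Usage: a 224x224 RGB batch yields one 224x224 heatmap per generator.
model = ResNetWithEB(n_generators=10)
maps = model(torch.randn(2, 3, 224, 224))
print(maps.shape)  # torch.Size([2, 10, 224, 224])
```

Heatmap-based XAI methods can then be applied channel-wise to these outputs, so that each explanation is attributed to an individual generator rather than to the sample as a whole.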


