Learning Semantically Meaningful Features for Interpretable Classifications

Learning semantically meaningful features is important for Deep Neural Networks to win end-user trust. Attempts to generate post-hoc explanations fall short of gaining user confidence because they do not improve the interpretability of the feature representations the models learn. In this work, we propose the Semantic Convolutional Neural Network (SemCNN), which has an additional Concept layer that learns associations between visual features and word phrases. SemCNN employs an objective function that optimizes for both prediction accuracy and the semantic meaningfulness of the learned feature representations. Further, SemCNN makes its decisions as a weighted sum of the contributions of these features, yielding fully interpretable decisions. Experimental results on multiple benchmark datasets demonstrate that SemCNN can learn features with clear semantic meaning, along with their corresponding contributions to the model decision, without compromising prediction accuracy. Furthermore, the learned concepts are transferable and can be applied to new classes of objects that share similar concepts.
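The abstract gives no implementation details, so the following is only a minimal sketch of the architecture it describes, assuming a ResNet-18 backbone, a linear Concept layer, a bias-free linear classifier (so each class logit is exactly a weighted sum of concept activations), and a cosine-similarity alignment term against word-phrase embeddings. All layer sizes, names, and the loss weight `lam` are illustrative assumptions, not the paper's published design.

```python
# Hypothetical sketch (not the authors' code): CNN backbone -> Concept layer ->
# linear classifier, trained with a joint accuracy + semantic-alignment loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SemCNNSketch(nn.Module):
    def __init__(self, num_concepts: int, num_classes: int, phrase_dim: int = 300):
        super().__init__()
        backbone = models.resnet18(weights=None)  # backbone choice is an assumption
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        self.concepts = nn.Linear(512, num_concepts)           # the "Concept layer"
        self.to_phrase = nn.Linear(num_concepts, phrase_dim)   # map concepts to phrase-embedding space
        # No bias: each class logit is purely a weighted sum of concept activations.
        self.classifier = nn.Linear(num_concepts, num_classes, bias=False)

    def forward(self, x):
        h = self.features(x).flatten(1)      # pooled visual features
        c = torch.relu(self.concepts(h))     # non-negative concept activations
        return self.classifier(c), c


def joint_loss(logits, c, labels, phrase_emb, model, lam=0.1):
    """Prediction accuracy + semantic meaningfulness (alignment term assumed)."""
    cls = F.cross_entropy(logits, labels)
    # Pull projected concept activations toward the word-phrase embeddings of the
    # ground-truth class; a plausible stand-in for the paper's alignment term.
    align = 1.0 - F.cosine_similarity(model.to_phrase(c), phrase_emb[labels]).mean()
    return cls + lam * align


def concept_contributions(model, c, class_idx):
    """Per-concept contribution to one class logit: weight * activation."""
    return model.classifier.weight[class_idx] * c
```

Because the classifier is linear over concept activations, each class logit decomposes exactly into per-concept terms (weight times activation), which is what makes the weighted-sum decision fully interpretable once each concept unit is associated with a word phrase.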


Related research

- Contextual Semantic Interpretability (09/18/2020)
- An interpretable neural network model through piecewise linear approximation (01/20/2020)
- Feature representations useful for predicting image memorability (03/14/2023)
- Interpretable Neural-Symbolic Concept Reasoning (04/27/2023)
- Context Matters: A Theory of Semantic Discriminability for Perceptual Encoding Systems (08/08/2021)
- A Disentangling Invertible Interpretation Network for Explaining Latent Representations (04/27/2020)
- Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations (03/19/2023)
