Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks

03/07/2022
by Nicola Garau, et al.

Deep neural networks achieve outstanding results on a wide variety of tasks, often outperforming human experts. However, a known limitation of current neural architectures is that it is hard to understand and interpret the network's response to a given input. This stems directly from the huge number of parameters and the associated non-linearities of neural models, which are often used as black boxes. In critical applications such as autonomous driving, security and safety, and medicine and health, the lack of interpretability of the network's behavior tends to induce skepticism and limit trustworthiness, despite the accurate performance of such systems on the given task. Furthermore, a single metric, such as classification accuracy, provides a non-exhaustive evaluation in most real-world scenarios. In this paper, we take a step forward towards interpretability in neural networks, providing new tools to interpret their behavior. We present Agglomerator, a framework capable of providing a representation of part-whole hierarchies from visual cues and of organizing the input distribution to match the conceptual-semantic hierarchical structure between classes. We evaluate our method on common datasets, such as SmallNORB, MNIST, FashionMNIST, CIFAR-10, and CIFAR-100, providing a more interpretable model than other state-of-the-art approaches.

Related research

Interpreting Deep Visual Representations via Network Dissection (11/15/2017)
The success of recent deep convolutional neural networks (CNNs) depends ...

Reason induced visual attention for explainable autonomous driving (10/11/2021)
Deep learning (DL) based computer vision (CV) models are generally consi...

Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean (01/27/2023)
Deep neural networks (DNNs) have proven to be highly effective in a vari...

Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction (11/16/2022)
Although neural networks have seen tremendous success as predictive mode...

Interpretable Neural Networks for Panel Data Analysis in Economics (10/11/2020)
The lack of interpretability and transparency are preventing economists ...

Learning an Interpretable Model for Driver Behavior Prediction with Inductive Biases (07/31/2022)
To plan safe maneuvers and act with foresight, autonomous vehicles must ...

Scale Alone Does not Improve Mechanistic Interpretability in Vision Models (07/11/2023)
In light of the recent widespread adoption of AI systems, understanding ...