NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks

08/29/2021
by Haekyu Park, et al.

Existing research on making sense of deep neural networks often focuses on neuron-level interpretation, which may not adequately capture the bigger picture of how concepts are collectively encoded by multiple neurons. We present NeuroCartography, an interactive system that scalably summarizes and visualizes the concepts learned by neural networks. It automatically discovers and groups neurons that detect the same concepts, and describes how such neuron groups interact to form higher-level concepts and, ultimately, the network's predictions. NeuroCartography introduces two scalable summarization techniques: (1) neuron clustering, which groups neurons based on the semantic similarity of the concepts they detect (e.g., neurons detecting "dog faces" of different breeds are grouped together); and (2) neuron embedding, which encodes the associations between related concepts based on how often they co-occur (e.g., neurons detecting "dog face" and "dog tail" are placed closer together in the embedding space). Key to the scalability of both techniques is the ability to compute the relationships among all neuron pairs in time linear, rather than quadratic, in the number of neurons. NeuroCartography scales to large data, such as the ImageNet dataset with 1.2M images. The system's tightly coordinated views integrate these techniques to visualize concepts and their relationships, projecting concept associations onto a 2D space in the Neuron Projection View and summarizing neuron clusters and their relationships in the Graph View. Through a large-scale human evaluation, we demonstrate that our technique discovers neuron groups that represent coherent, human-meaningful concepts. Through usage scenarios, we further describe how our approaches enable interesting and surprising discoveries, such as concept cascades of related and isolated concepts. The NeuroCartography visualization runs in modern browsers and is open-sourced.
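
The abstract only hints at how the all-pairs similarity computation stays near-linear. As a rough illustration, the Python sketch below characterizes each neuron by the set of images that activate it most strongly, and buckets neurons by banded MinHash signatures so that only neurons landing in the same bucket are ever treated as candidates for the same concept group. This is a generic locality-sensitive-hashing pattern, not the paper's published algorithm; the data format, function names, and parameters are all assumptions made for illustration.

    # Illustrative sketch (not the paper's code): group neurons whose
    # top-activating image sets are similar, in roughly linear time, by
    # hashing MinHash signature bands instead of comparing all neuron pairs.
    import random
    from collections import defaultdict

    def minhash_signature(image_ids, seeds):
        # One min-hash per seed; similar sets tend to share min-hash values.
        return [min(hash((seed, img)) for img in image_ids) for seed in seeds]

    def candidate_concept_groups(top_images_per_neuron,
                                 num_hashes=16, band_size=4, seed=0):
        rng = random.Random(seed)
        seeds = [rng.randrange(2**32) for _ in range(num_hashes)]
        buckets = defaultdict(set)
        for neuron_id, image_ids in top_images_per_neuron.items():
            sig = minhash_signature(image_ids, seeds)
            # LSH banding: neurons sharing any band land in the same bucket,
            # so each neuron touches only a constant number of buckets.
            for start in range(0, num_hashes, band_size):
                band = (start, tuple(sig[start:start + band_size]))
                buckets[band].add(neuron_id)
        groups = {frozenset(g) for g in buckets.values() if len(g) > 1}
        return [sorted(g) for g in groups]

    # Toy usage: neurons 0 and 1 fire on the same images ("same concept").
    top_images = {0: {10, 11, 12, 13}, 1: {10, 11, 12, 13}, 2: {50, 51, 52}}
    print(candidate_concept_groups(top_images))  # -> [[0, 1]]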

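In the same spirit, the neuron-embedding idea (neurons that frequently co-occur across images end up nearby in the embedding space) can be sketched with a word2vec-style objective over co-activation pairs. Because each image contributes only its small set of strongly activated neurons, pair enumeration stays cheap even when the network has many neurons overall. This is an assumption-laden toy, not the paper's method; the input format, function names, and hyperparameters are invented for illustration.

    # Illustrative sketch (not the paper's code): learn neuron embeddings so
    # that neurons firing on the same images get nearby vectors.
    import numpy as np

    def train_neuron_embedding(active_neurons_per_image, num_neurons,
                               dim=8, lr=0.05, negatives=2, epochs=30, seed=0):
        rng = np.random.default_rng(seed)
        emb = rng.normal(scale=0.1, size=(num_neurons, dim))

        def step(i, j, label):
            # Logistic loss on the dot product: pull co-occurring pairs
            # (label 1) together, push random pairs (label 0) apart.
            p = 1.0 / (1.0 + np.exp(-emb[i] @ emb[j]))
            g = lr * (label - p)
            delta_i = g * emb[j]          # computed before emb[j] changes
            emb[j] += g * emb[i]
            emb[i] += delta_i

        for _ in range(epochs):
            for active in active_neurons_per_image:
                for a, i in enumerate(active):
                    for j in active[a + 1:]:      # each co-occurring pair
                        step(i, j, 1)
                        for k in rng.integers(num_neurons, size=negatives):
                            step(i, int(k), 0)    # random negative sample
        return emb

    # Toy usage: neurons 0/1 co-fire (say "dog face"/"dog tail"), 2/3 co-fire.
    images = [[0, 1], [2, 3]] * 10
    emb = train_neuron_embedding(images, num_neurons=4)
    cos = lambda a, b: float(emb[a] @ emb[b] /
                             (np.linalg.norm(emb[a]) * np.linalg.norm(emb[b])))
    print(cos(0, 1), cos(0, 2))  # expect cos(0, 1) to exceed cos(0, 2)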

