
TCNL: Transparent and Controllable Network Learning Via Embedding Human-Guided Concepts

by   Zhihao Wang, et al.
Beijing University of Posts and Telecommunications
NetEase, Inc

Explaining deep learning models is of vital importance for understanding artificial intelligence systems, improving safety, and evaluating fairness. To better understand and control the CNN model, many methods for transparency-interpretability have been proposed. However, most of these works are not intuitive for human understanding and offer insufficient human control over the CNN model. We propose a novel method, Transparent and Controllable Network Learning (TCNL), to overcome these challenges. Towards the goal of improving transparency-interpretability, in TCNL we define concepts for specific classification tasks through a human-intuition study and incorporate the concept information into the CNN model. In TCNL, a shallow feature extractor first produces preliminary features. Several concept feature extractors are then built on top of the shallow feature extractor to learn high-dimensional concept representations; each concept feature extractor is encouraged to encode information related to one of the predefined concepts. We also build a concept mapper to visualize the features extracted by the concept extractors in a human-intuitive way. TCNL provides a generalizable approach to transparency-interpretability: researchers can define concepts corresponding to a given classification task and encourage the model to encode the corresponding concept information, which improves both the transparency-interpretability and the controllability of the CNN model. The datasets (with concept sets) for our experiments will also be released.
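The pipeline described above (shallow extractor, per-concept feature extractors, concept mapper, classifier) can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: all layer shapes, the decoder-style concept mapper, and the classifier over concatenated concept features are assumptions made for the sketch; the concept-alignment loss that "encourages" each extractor to encode its concept is omitted.

```python
import torch
import torch.nn as nn

class TCNLSketch(nn.Module):
    """Hedged sketch of the TCNL architecture (layer sizes are assumed)."""

    def __init__(self, num_concepts=3, num_classes=10):
        super().__init__()
        # Shallow feature extractor: preliminary low-level features.
        self.shallow = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # One concept feature extractor per predefined concept; in TCNL each
        # is encouraged (via a concept loss, not shown) to encode one concept.
        self.concept_extractors = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            for _ in range(num_concepts)
        ])
        # Concept mapper: decodes a concept representation back to image
        # space so the encoded concept can be inspected visually.
        self.concept_mapper = nn.Sequential(
            nn.ConvTranspose2d(64, 3, kernel_size=8, stride=8), nn.Sigmoid(),
        )
        # Classifier over the concatenated concept representations.
        self.classifier = nn.Linear(num_concepts * 64 * 4 * 4, num_classes)

    def forward(self, x):
        h = self.shallow(x)
        concept_feats = [ext(h) for ext in self.concept_extractors]
        logits = self.classifier(
            torch.cat([f.flatten(1) for f in concept_feats], dim=1))
        # Human-intuitive visualization of each concept representation.
        visualizations = [self.concept_mapper(f) for f in concept_feats]
        return logits, concept_feats, visualizations
```

Keeping the concept extractors as separate branches is what makes the model controllable in the sense the abstract describes: a branch can be inspected, supervised, or ablated independently of the others.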




PACE: Posthoc Architecture-Agnostic Concept Extractor for Explaining CNNs

Deep CNNs, though have achieved the state of the art performance in imag...

Learning Interpretable Concept-Based Models with Human Feedback

Machine learning models that first learn a representation of a domain in...

MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks

Deep convolutional networks have been quite successful at various image ...

Learning Bottleneck Concepts in Image Classification

Interpreting and explaining the behavior of deep neural networks is crit...

Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models

Evaluating, explaining, and visualizing high-level concepts in generativ...

A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts

Despite substantial progress in applying neural networks (NN) to a wide ...

Model Transparency and Interpretability: Survey and Application to the Insurance Industry

The use of models, even if efficient, must be accompanied by an understa...