Explaining AI-based Decision Support Systems using Concept Localization Maps

05/04/2020
by Adriano Lucieri, et al.

Human-centric explainability of AI-based Decision Support Systems (DSS) using visual input modalities is directly related to the reliability and practicality of such algorithms. An otherwise accurate and robust DSS might not enjoy the trust of experts in critical application areas if it cannot provide reasonable justifications for its predictions. This paper introduces Concept Localization Maps (CLMs), a novel approach towards explainable image classifiers employed as DSS. CLMs extend Concept Activation Vectors (CAVs) by locating the regions corresponding to a learned concept in the latent space of a trained image classifier. They provide qualitative and quantitative assurance of a classifier's ability to learn and focus on concepts that are also important to humans during image recognition. To better understand the effectiveness of the proposed method, we generated a new synthetic dataset called Simple Concept DataBase (SCDB) that includes annotations for 10 distinguishable concepts, and made it publicly available. We evaluated our proposed method on SCDB as well as a real-world dataset called CelebA. We achieved localization recall of above 80% using SE-ResNeXt-50 on SCDB. Our results on both datasets show great promise of CLMs for easing acceptance of DSS in practice.
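As a rough illustration of the general idea (not necessarily the authors' exact algorithm), a CAV can be obtained as the normal of a linear classifier separating concept activations from random activations, and a localization map can then be formed by projecting the spatial activation vectors of an image onto that CAV direction. The sketch below assumes this simplified construction; the function names train_cav and concept_localization_map, the layer shapes, and the synthetic data are hypothetical placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_cav(concept_acts, random_acts):
    # Fit a linear classifier on pooled activations (N, C) of concept vs.
    # random examples; the unit-normalized decision-boundary normal is the CAV.
    X = np.concatenate([concept_acts, random_acts], axis=0)
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def concept_localization_map(spatial_acts, cav):
    # Project each spatial activation vector (H, W, C) onto the CAV direction,
    # yielding an (H, W) map; high values mark regions aligned with the concept.
    clm = np.tensordot(spatial_acts, cav, axes=([-1], [0]))
    clm -= clm.min()
    if clm.max() > 0:
        clm /= clm.max()  # normalize to [0, 1] for visualization
    return clm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = 256                                    # channel dimension of the chosen layer
    concept = rng.normal(0.5, 1.0, (100, C))   # stand-ins for pooled concept activations
    random_ = rng.normal(0.0, 1.0, (100, C))   # stand-ins for pooled random activations
    cav = train_cav(concept, random_)
    acts = rng.normal(0.0, 1.0, (14, 14, C))   # spatial activations of one image
    print(concept_localization_map(acts, cav).shape)  # (14, 14)

The resulting map could be upsampled to the input resolution and overlaid on the image to inspect where the classifier "sees" the concept.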


Related research

01/04/2022
ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis of Skin Lesions
One principal impediment in the successful deployment of AI-based Comput...

11/29/2022
Understanding and Enhancing Robustness of Concept-based Models
Rising usage of deep neural networks to perform decision making in criti...

05/05/2020
On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
Deep learning based medical image classifiers have shown remarkable prow...

12/18/2018
Interactive Naming for Explaining Deep Neural Networks: A Formative Study
We consider the problem of explaining the decisions of deep neural netwo...

02/05/2020
Concept Whitening for Interpretable Image Recognition
What does a neural network encode about a concept as we traverse through...

06/09/2022
ECLAD: Extracting Concepts with Local Aggregated Descriptors
Convolutional neural networks are being increasingly used in critical sy...

02/10/2020
Adversarial TCAV – Robust and Effective Interpretation of Intermediate Layers in Neural Networks
Interpreting neural network decisions and the information learned in int...
