Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models

11/22/2022
by   Poulami Sinhamahapatra, et al.

Explaining black-box Artificial Intelligence (AI) models is a cornerstone of trustworthy AI and a prerequisite for its use in safety-critical applications, where AI models must reliably assist humans in critical decisions. However, instead of trying to explain our models post hoc, we need models that are interpretable by design, built on a reasoning process similar to that of humans, which exploits meaningful high-level concepts such as shapes, textures, or object parts. Learning such concepts is often hindered by the need to specify and annotate them up front. Prototype-based learning approaches such as ProtoPNet instead claim to discover visually meaningful prototypes in an unsupervised way. In this work, we propose a set of properties that these prototypes must fulfill to enable human analysis, e.g. as part of a reliable model assessment case, and analyse existing methods in the light of these properties. In a 'Guess who?' game setting, we find that these prototypes are still a long way from providing definite explanations. We quantitatively validate our findings with a user study indicating that many of the learnt prototypes are not considered useful for human understanding. We discuss the missing links in existing methods and present a potential real-world application motivating the need to progress towards truly human-interpretable prototypes.
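To make the prototype-based reasoning discussed above concrete, the following is a minimal sketch (not the authors' implementation) of a ProtoPNet-style similarity layer in PyTorch: each spatial patch of a convolutional feature map is compared to a set of learned prototype vectors via squared L2 distance, distances are mapped to similarity activations, and max-pooling over locations yields one "how strongly does this prototype appear anywhere in the image" score per prototype. The function name and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prototype_similarity(features, prototypes, eps=1e-4):
    """Sketch of a ProtoPNet-style prototype layer.

    features:   (B, C, H, W) convolutional feature map
    prototypes: (P, C, 1, 1) learned prototype vectors
    returns:    (B, P) similarity score per prototype
    """
    # Squared L2 distance between every spatial patch and every prototype,
    # expanded as ||x||^2 - 2 x.p + ||p||^2; the cross term is a 1x1 conv.
    dists = (
        (features ** 2).sum(dim=1, keepdim=True)              # (B, 1, H, W)
        - 2 * F.conv2d(features, prototypes)                  # (B, P, H, W)
        + (prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
    )
    # Log-activation as in ProtoPNet: large when some patch is close
    # to the prototype, bounded by eps for numerical stability.
    sims = torch.log((dists + 1) / (dists + eps))
    # Max over spatial locations: one score per (image, prototype) pair.
    return sims.amax(dim=(2, 3))                              # (B, P)
```

In a full model these per-prototype scores would feed a final linear layer over the classes, so that each prediction can be traced back to the image patches most similar to each prototype.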


Related research:

- Visual correspondence-based explanations improve AI robustness and human-AI team accuracy (07/26/2022). Explaining artificial intelligence (AI) predictions is increasingly impo...
- GENIE-NF-AI: Identifying Neurofibromatosis Tumors using Liquid Neural Network (LTC) trained on AACR GENIE Datasets (04/26/2023). In recent years, the field of medicine has been increasingly adopting ar...
- This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition (11/05/2020). Image recognition with prototypes is considered an interpretable alterna...
- OAK4XAI: Model towards Out-Of-Box eXplainable Artificial Intelligence for Digital Agriculture (09/29/2022). Recent machine learning approaches have been effective in Artificial Int...
- Towards Faithful and Meaningful Interpretable Representations (08/16/2020). Interpretable representations are the backbone of many black-box explain...
- Provable concept learning for interpretable predictions using variational inference (04/01/2022). In safety critical applications, practitioners are reluctant to trust ne...
- A Bayesian Account of Measures of Interpretability in Human-AI Interaction (11/22/2020). Existing approaches for the design of interpretable agent behavior consi...
