How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels

08/26/2020
by Hua Shen, et al.

Explaining to users why automated systems make certain mistakes is important and challenging. Researchers have proposed ways to automatically produce interpretations for deep neural network models. However, it is unclear how useful these interpretations are in helping users figure out why they are getting an error. If an interpretation effectively explains to users how the underlying deep neural network model works, people who are shown the interpretation should be better at predicting the model's outputs than those who are not. This paper presents an investigation into whether showing machine-generated visual interpretations helps users understand the incorrectly predicted labels produced by image classifiers. We showed the images and the correct labels to 150 online crowd workers and asked them to select the incorrectly predicted labels, with or without showing them the machine-generated visual interpretations. The results demonstrated that displaying the visual interpretations did not increase, but rather decreased, the average guessing accuracy by roughly 10%.
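The abstract does not specify which method produced the visual interpretations shown to the crowd workers, so the snippet below is only an illustrative sketch of one common approach: a gradient-based saliency map over an image classifier's top-1 (possibly incorrect) prediction. It assumes PyTorch with a pretrained torchvision ResNet-18; the function name and image path are hypothetical.

```python
# Illustrative sketch only -- not the paper's exact pipeline.
# Computes a gradient saliency map for a classifier's top-1 prediction.
# Assumes torchvision >= 0.13 for the `weights=` argument.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def saliency_map(image_path: str) -> torch.Tensor:
    """Return a (224, 224) saliency map for the model's predicted label."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)      # shape: (1, 3, 224, 224)
    x.requires_grad_(True)

    logits = model(x)
    pred = logits.argmax(dim=1).item()    # the (possibly incorrect) predicted class
    logits[0, pred].backward()            # gradient of the predicted logit w.r.t. pixels

    # Max absolute gradient across color channels = per-pixel importance score.
    return x.grad.abs().max(dim=1)[0].squeeze(0)
```

Brighter values in the returned map mark pixels whose change most affects the predicted logit. Gradient saliency is just one of many visual interpretation techniques (e.g., Grad-CAM, LIME) that the study's setup could accommodate.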

