What does LIME really see in images?

02/11/2021
by Damien Garreau, et al.

The performance of modern algorithms on certain computer vision tasks, such as object recognition, is now close to that of humans. This success has come at the price of complicated architectures depending on millions of parameters, and it has become quite challenging to understand how particular predictions are made. Interpretability methods propose to give us this understanding. In this paper, we study LIME, perhaps the most popular of these methods. On the theoretical side, we show that when the number of generated examples is large, LIME explanations concentrate around a limit explanation for which we give an explicit expression. We extend this study to elementary shape detectors and linear models. As a consequence of this analysis, we uncover a connection between LIME and integrated gradients, another explanation method. More precisely, the LIME explanations are similar to the sum of integrated gradients over the superpixels used in the preprocessing step of LIME.
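The claimed connection can be illustrated with a small sketch. This is not the authors' code; it assumes a linear model f(x) = w.x with a zero baseline, for which integrated gradients admit the closed form IG_i = w_i * x_i, and a hypothetical segmentation of the image into superpixels. Summing IG over each superpixel then yields the limit explanation the paper associates with LIME:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16                      # number of pixels (flattened toy image)
x = rng.random(d)           # input image
w = rng.standard_normal(d)  # weights of the (assumed) linear model

# Hypothetical segmentation: 4 superpixels of 4 contiguous pixels each.
segments = np.repeat(np.arange(4), 4)

# Per-pixel integrated gradients. For a linear model with a zero baseline,
# the path integral of the (constant) gradient w gives IG_i = w_i * x_i.
ig = w * x

# Sum integrated gradients over each superpixel: per the paper's analysis,
# this approximates the limit LIME explanation for that superpixel.
lime_like = np.array([ig[segments == j].sum() for j in range(4)])

print(lime_like)
```

In practice one would replace the closed-form IG with a Riemann-sum approximation along the baseline-to-input path and use an actual segmentation algorithm to obtain the superpixels; the summation over segments stays the same.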


Related research

- The Intriguing Properties of Model Explanations (01/30/2018). Linear approximations to the decision boundary of a complex model have b...
- Learning how to explain neural networks: PatternNet and PatternAttribution (05/16/2017). DeConvNet, Guided BackProp, LRP, were invented to better understand deep...
- Comparing Baseline Shapley and Integrated Gradients for Local Explanation: Some Additional Insights (08/12/2022). There are many different methods in the literature for local explanation...
- Attribution in Scale and Space (04/03/2020). We study the attribution problem [28] for deep networks applied to perce...
- The Manifold Hypothesis for Gradient-Based Explanations (06/15/2022). When do gradient-based explanation algorithms provide meaningful explana...
- Looking deeper into LIME (08/25/2020). Interpretability of machine learning algorithms is a pressing need. Numer...
- IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients (03/24/2023). Integrated Gradients (IG) as well as its variants are well-known techniq...
