An Extension of LIME with Improvement of Interpretability and Fidelity

04/26/2020
by Sheng Shi, et al.

While deep learning has made significant achievements in Artificial Intelligence (AI), its lack of transparency has limited broad application in various vertical domains. Explainability is not only a gateway between AI and the real world, but also a powerful tool for detecting flaws in models and bias in data. Local Interpretable Model-agnostic Explanation (LIME) is a widely accepted technique that explains the prediction of any classifier faithfully by learning an interpretable model locally around the predicted instance. As an extension of LIME, this paper proposes a high-interpretability and high-fidelity local explanation method, called Local Explanation using feature Dependency Sampling and Nonlinear Approximation (LEDSNA). Given an instance being explained, LEDSNA enhances interpretability by sampling features according to their intrinsic dependencies. In addition, LEDSNA improves the fidelity of local explanations by approximating the nonlinear boundary of the local decision. We evaluate our method on classification tasks in both the image domain and the text domain. Experiments show that LEDSNA's explanations of black-box models achieve much better performance than the original LIME in terms of interpretability and fidelity.
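To make the local-surrogate idea the abstract builds on concrete, below is a minimal sketch of a LIME-style explanation for a tabular black-box model: perturb the instance, weight the perturbed samples by proximity, and fit a weighted linear model whose coefficients serve as feature attributions. The function name, Gaussian perturbation scale, and RBF kernel width are illustrative assumptions; this is not the paper's LEDSNA method, which additionally uses dependency-aware sampling and a nonlinear local approximation.

```python
# Minimal sketch of a LIME-style local surrogate explanation (tabular case).
# Illustrative only: this is NOT the paper's LEDSNA method.
import numpy as np
from sklearn.linear_model import Ridge


def explain_instance(predict_proba, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x.

    predict_proba: black-box function mapping an (n, d) array to class probabilities.
    Returns per-feature weights approximating the model's local behaviour.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]

    # 1. Perturb the instance by adding Gaussian noise (sample around x).
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))

    # 2. Query the black-box model on the perturbed samples.
    y = predict_proba(Z)

    # 3. Weight samples by proximity to x with an exponential (RBF) kernel.
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))

    # 4. Fit an interpretable linear surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_  # local feature attributions


if __name__ == "__main__":
    # Toy black-box: a nonlinear function of two features.
    f = lambda Z: 1.0 / (1.0 + np.exp(-(3 * Z[:, 0] - np.sin(Z[:, 1]))))
    print(explain_instance(f, np.array([0.2, -1.0])))
```

In this sketch the surrogate is linear, which is exactly the fidelity limitation the abstract points to: LEDSNA instead approximates the nonlinear local decision boundary and samples features jointly rather than independently.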

