Model interpretation using improved local regression with variable importance

09/12/2022
by Gilson Y. Shimizu, et al.

A fundamental question in the use of ML models concerns the explanation of their predictions, with the aim of increasing transparency in decision-making. Although several interpretability methods have emerged, gaps regarding the reliability of their explanations have been identified. For instance, most methods are unstable (meaning that they give very different explanations with small changes in the data) and do not cope well with irrelevant features (that is, features not related to the label). This article introduces two new interpretability methods, VarImp and SupClus, that overcome these issues by using local regression fits with a weighted distance that takes variable importance into account. Whereas VarImp generates explanations for each instance and can be applied to datasets with more complex relationships, SupClus interprets clusters of instances with similar explanations and can be applied to simpler datasets where such clusters can be found. We compare our methods with state-of-the-art approaches and show that they yield better explanations according to several metrics, particularly in high-dimensional problems with irrelevant features, as well as when the relationship between features and target is non-linear.
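The abstract only sketches the core idea, importance-weighted local regression, so the snippet below is a minimal illustrative sketch rather than the authors' implementation. It assumes global feature importances from a random forest, uses them to rescale the distance that selects a neighbourhood around the instance to explain, and reads the local linear coefficients as the explanation. The function name `importance_weighted_local_explanation`, the choice of random-forest importances, and the fixed neighbourhood size `k` are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def importance_weighted_local_explanation(X, y, x0, k=50):
    """Explain the prediction at x0 with a local linear fit whose
    neighbourhood is defined by an importance-weighted distance.

    Hypothetical sketch: the paper's exact weighting scheme is not
    given in the abstract; here, global random-forest importances
    rescale each feature axis before computing Euclidean distances.
    """
    # Global variable importances (irrelevant features get ~0 weight).
    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    w = forest.feature_importances_                    # shape (p,)

    # Importance-weighted Euclidean distance from x0 to every instance.
    d = np.sqrt(((X - x0) ** 2 * w).sum(axis=1))

    # Fit a linear model on the k nearest neighbours under that distance.
    idx = np.argsort(d)[:k]
    local = LinearRegression().fit(X[idx], y[idx])
    return local.coef_                                 # per-feature explanation

# Toy usage: only the first feature matters, so its coefficient dominates.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
print(importance_weighted_local_explanation(X, y, X[0]))
```

Because irrelevant features receive near-zero importance, they contribute little to the distance, which matches the abstract's claim that the weighted distance is what lets the methods cope with irrelevant features in high-dimensional problems.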

