Visualizing the Feature Importance for Black Box Models

04/18/2018
by Giuseppe Casalicchio et al.

In recent years, many model-agnostic methods have been developed to improve the transparency, trustworthiness, and interpretability of machine learning models. We introduce local feature importance as a local version of a recent model-agnostic global feature importance method. Based on local feature importance, we propose two visual tools: partial importance (PI) and individual conditional importance (ICI) plots, which visualize how changes in a feature affect the model performance both on average and for individual observations. Our proposed methods are related to partial dependence (PD) and individual conditional expectation (ICE) plots, but visualize the expected (conditional) feature importance instead of the expected (conditional) prediction. Furthermore, we show that averaging ICI curves across observations yields a PI curve, and that integrating the PI curve with respect to the distribution of the considered feature results in the global feature importance. Another contribution of our paper is the Shapley feature importance, which fairly distributes the overall performance of a model among the features according to their marginal contributions and which can be used to compare feature importance across different models.
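The relationship the abstract describes — per-observation importance curves whose pointwise average gives the PI curve — can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the function names `ici_curves` and `pi_curve` are hypothetical, a per-observation squared-error loss is assumed (the paper allows a general loss), and the model is any object with a scikit-learn-style `predict` method.

```python
import numpy as np

def squared_loss(y, pred):
    # Per-observation squared error (an assumed choice of loss).
    return (y - pred) ** 2

def ici_curves(model, X, y, feature, grid, loss):
    # ICI curve for each observation: change in its loss when the value of
    # `feature` is replaced by each grid point, relative to the loss at the
    # observed feature value.
    base = loss(y, model.predict(X))
    curves = np.empty((X.shape[0], len(grid)))
    for k, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, feature] = v
        curves[:, k] = loss(y, model.predict(Xv)) - base
    return curves

def pi_curve(ici):
    # PI curve: pointwise average of the ICI curves across observations.
    return ici.mean(axis=0)

# Toy example: a model that already fits y = 2 * x0 perfectly,
# so the baseline loss is zero and the curves isolate the perturbation effect.
class ToyModel:
    def predict(self, X):
        return 2.0 * X[:, 0]

X = np.array([[1.0, 5.0], [2.0, 5.0], [3.0, 5.0]])
y = 2.0 * X[:, 0]
ici = ici_curves(ToyModel(), X, y, feature=0, grid=[0.0, 1.0], loss=squared_loss)
pi = pi_curve(ici)  # one averaged importance value per grid point
```

Integrating `pi` against the empirical distribution of the feature (e.g. averaging it over grid points drawn from the observed feature values) would then recover a single global importance number, mirroring the aggregation chain described above.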
