Feature Importance versus Feature Influence and What It Signifies for Explainable AI

08/07/2023
by Kary Främling, et al.

When used in the context of decision theory, feature importance expresses how much changing the value of a feature can change the model outcome (or the utility of the outcome) compared to other features. Feature importance should not be confused with the feature influence used by most state-of-the-art post-hoc Explainable AI methods: unlike feature importance, feature influence is measured against a reference level or baseline. The Contextual Importance and Utility (CIU) method provides a unified definition of global and local feature importance that is also applicable to post-hoc explanations, where the value-utility concept gives an instance-level assessment of how favorable or unfavorable a feature value is for the outcome. The paper shows how CIU can be applied to both global and local explainability, assesses the fidelity and stability of different methods, and shows how explanations that use contextual importance and contextual utility can be more expressive and flexible than explanations based on influence only.
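
To make the distinction concrete, here is a minimal sketch of how the contextual importance (CI) and contextual utility (CU) of a single feature could be estimated for one instance by varying that feature over its allowed range while holding the other features fixed. The function and parameter names (`ciu_for_feature`, `predict`, `feature_range`) and the default output bounds of [0, 1] are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def ciu_for_feature(predict, instance, feature_idx, feature_range,
                    absmin=0.0, absmax=1.0, n_samples=100):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    for one feature of one instance (the "context"), by varying that
    feature over its allowed range while keeping the others fixed."""
    instance = np.asarray(instance, dtype=float)

    # Model output for the instance as it is (the context).
    out_c = predict(instance.reshape(1, -1))[0]

    # Perturb only the chosen feature across its allowed range.
    grid = np.linspace(feature_range[0], feature_range[1], n_samples)
    perturbed = np.repeat(instance.reshape(1, -1), n_samples, axis=0)
    perturbed[:, feature_idx] = grid
    outputs = np.asarray(predict(perturbed), dtype=float)

    cmin, cmax = outputs.min(), outputs.max()

    # CI: fraction of the output's absolute range [absmin, absmax]
    # that this feature can span in the current context.
    ci = (cmax - cmin) / (absmax - absmin)

    # CU: how favorable the current value is, relative to the worst
    # and best outputs reachable by changing only this feature.
    cu = (out_c - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

In this sketch, `predict` could for instance wrap a scikit-learn classifier's probability for the class of interest, e.g. `lambda X: model.predict_proba(X)[:, 1]`. CI then expresses how much of the output range this feature controls in the given context, while CU expresses how favorable the instance's current value is within that range, which is measured against the context rather than against a fixed baseline as influence-based methods do.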


