TimeSHAP: Explaining Recurrent Models through Sequence Perturbations

11/30/2020
by João Bento, et al.

Recurrent neural networks are a standard building block in numerous machine learning domains, from natural language processing to time-series classification. While their application has grown ubiquitous, understanding of their inner workings is still lacking. In practice, the complex decision-making in these models is treated as a black box, creating a tension between accuracy and interpretability. Moreover, the ability to understand a model's reasoning process is important for debugging it and, even more so, for building trust in its decisions. Although considerable research effort has been directed towards explaining black-box models in recent years, recurrent models have received relatively little attention. Any method that aims to explain decisions over a sequence of instances should assess not only feature importance but also event importance, an ability missing from state-of-the-art explainers. In this work, we contribute to filling these gaps by presenting TimeSHAP, a model-agnostic recurrent explainer that builds on KernelSHAP's sound theoretical footing and strong empirical results. As the input sequence may be arbitrarily long, we further propose a pruning method that is shown to dramatically improve its efficiency in practice.
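The event-importance idea described above can be sketched with a toy example. This is not the authors' TimeSHAP implementation: TimeSHAP approximates Shapley values with KernelSHAP-style sampling of perturbed sequences, whereas the sketch below computes exact Shapley values over events by enumerating all coalitions, which is only feasible for very short sequences. The linear scoring model, weights, and baseline are hypothetical stand-ins for a recurrent model.

```python
import itertools
import math
import numpy as np

def shapley_event_importance(model, seq, baseline):
    """Exact Shapley values treating each event (timestep) as a player.

    Events absent from a coalition are replaced by the `baseline`
    vector, mirroring the sequence perturbations used by
    perturbation-based explainers. Exponential in sequence length:
    illustration only.
    """
    n = len(seq)
    def value(coalition):
        # Build a perturbed sequence: keep events in the coalition,
        # replace the rest with the baseline vector.
        x = np.array([seq[t] if t in coalition else baseline
                      for t in range(n)])
        return model(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [e for e in range(n) if e != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                # Shapley kernel weight for a coalition of size |S|.
                w = (math.factorial(len(S))
                     * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical "model": a recency-weighted sum over a 4-event sequence.
weights = np.array([0.1, 0.2, 0.3, 0.4])
model = lambda x: float((weights * x[:, 0]).sum())
seq = np.array([[1.0], [2.0], [3.0], [4.0]])
baseline = np.array([0.0])

phi = shapley_event_importance(model, seq, baseline)
```

By the efficiency axiom, the event attributions sum to the gap between the model's score on the full sequence and its score on the all-baseline sequence; TimeSHAP's pruning exploits the observation that, for long sequences, most of this mass concentrates on recent events.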


research
11/26/2018

Please Stop Explaining Black Box Models for High Stakes Decisions

There are black box models now being used for high stakes decision-makin...
research
10/12/2021

A Rate-Distortion Framework for Explaining Black-box Model Decisions

We present the Rate-Distortion Explanation (RDE) framework, a mathematic...
research
10/21/2019

Contextual Prediction Difference Analysis

The interpretation of black-box models has been investigated in recent y...
research
05/30/2020

RelEx: A Model-Agnostic Relational Model Explainer

In recent years, considerable progress has been made on improving the in...
research
11/01/2019

Explaining black box decisions by Shapley cohort refinement

We introduce a variable importance measure to explain the importance of ...
research
06/01/2018

Producing radiologist-quality reports for interpretable artificial intelligence

Current approaches to explaining the decisions of deep learning systems ...
research
02/09/2021

Sequence-based Machine Learning Models in Jet Physics

Sequence-based modeling broadly refers to algorithms that act on data th...
