Robust and Stable Black Box Explanations

11/12/2020
by Himabindu Lakkaraju, et al.

As machine learning black boxes are increasingly being deployed in real-world applications, there has been growing interest in developing post hoc explanations that summarize the behavior of these black boxes. However, existing algorithms for generating such explanations have been shown to lack stability and robustness to distribution shifts. We propose a novel framework for generating robust and stable explanations of black box models based on adversarial training. Our framework optimizes a minimax objective that aims to construct the highest-fidelity explanation with respect to the worst case over a set of adversarial perturbations. We instantiate this algorithm for explanations in the form of linear models and decision sets by devising the required optimization procedures. To the best of our knowledge, this work makes the first attempt at generating post hoc explanations that are robust to a general class of adversarial perturbations of practical interest. Experimental evaluation with real-world and synthetic datasets demonstrates that our approach substantially improves the robustness of explanations without sacrificing their fidelity on the original data distribution.
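To make the minimax idea concrete, here is a minimal, hypothetical sketch of the linear-explanation case. It is not the paper's algorithm: the black box, the perturbation set (random shifts inside an L-infinity ball of assumed radius `epsilon`), the number of iterations, and the crude random-search inner maximization are all illustrative assumptions. The outer loop refits the linear explanation to reduce its worst-case fidelity loss over the sampled perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black box to be explained (its internals are unknown in practice).
def black_box(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

X = rng.normal(size=(500, 2))  # sample from the original data distribution
epsilon = 0.3                  # assumed radius of allowed input shifts

def fidelity_loss(w, b, X):
    """Squared error between the linear explanation and the black box on X."""
    return np.mean((X @ w + b - black_box(X)) ** 2)

def fit_linear(X):
    """Least-squares linear explanation on a (possibly shifted) sample."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, black_box(X), rcond=None)
    return coef[:-1], coef[-1]

# Minimax loop: the inner max searches random shifts in the L_inf ball (a crude
# stand-in for an adversarial perturbation set); the outer min refits the
# explanation on the original data augmented with the worst-case shift found.
w, b = fit_linear(X)
for _ in range(10):
    shifts = rng.uniform(-epsilon, epsilon, size=(20, X.shape[1]))
    worst = max(shifts, key=lambda s: fidelity_loss(w, b, X + s))
    w, b = fit_linear(np.vstack([X, X + worst]))

print("explanation coefficients:", np.round(w, 2))
```

In this toy setup the explanation is fit jointly on original and worst-case-shifted data, so it trades a little on-distribution fidelity for stability under the perturbation set, which is the qualitative behavior the abstract describes.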


