A Formal Approach to Explainability

01/15/2020
by Lior Wolf, et al.

We regard explanations as a blending of the input sample and the model's output, and offer a few definitions that capture various desired properties of the function that generates these explanations. We study the links between these properties, and between explanation-generating functions and the intermediate representations of learned models. We show, for example, that if the activations of a given layer are consistent with an explanation, then so are all subsequent layers. In addition, we study the intersection and union of explanations as a way to construct new explanations.
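To make the propagation claim concrete, here is one plausible, simplified way it could be formalized; the notation below is ours and may differ from the authors' exact definitions. Write the network as a composition of layers, $f = f_L \circ \cdots \circ f_1$, with layer activations $a_k = f_k \circ \cdots \circ f_1$, and let an explanation be a function of the input and the model's output, $e(x) = \phi\bigl(x, f(x)\bigr)$. Say layer $k$ is consistent with $e$ if

\[
e(x_1) = e(x_2) \;\Longrightarrow\; a_k(x_1) = a_k(x_2) \qquad \text{for all inputs } x_1, x_2 .
\]

Under this (assumed) reading, the propagation claim follows immediately: since $a_m = (f_m \circ \cdots \circ f_{k+1}) \circ a_k$ for every $m > k$, equal activations at layer $k$ force equal activations at layer $m$, so consistency at layer $k$ implies consistency at every subsequent layer.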

