Local Interpretable Model-agnostic Explanations of Bayesian Predictive Models via Kullback-Leibler Projections

10/05/2018
by Tomi Peltola, et al.

We introduce a method, KL-LIME, for explaining predictions of Bayesian predictive models by projecting the information in the predictive distribution locally onto a simpler, interpretable explanation model. The proposed approach combines the recent Local Interpretable Model-agnostic Explanations (LIME) method with ideas from Bayesian projection predictive variable selection methods. The information-theoretic basis helps in navigating the trade-off between explanation fidelity and complexity. We demonstrate the method by explaining MNIST digit classifications made by a Bayesian deep convolutional neural network.
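As a rough illustration of the idea (not the authors' implementation), the sketch below fits a sparse linear explanation of a Bayesian binary classifier around a single instance. All names and parameters (`kl_lime_explain`, `predict_proba`, `sigma`, `kernel_width`, `alpha`) are assumptions made for the example. It relies on the fact that, for Bernoulli predictive distributions, the local KL projection objective reduces to a weighted cross-entropy with soft targets, which can be handled by an off-the-shelf L1-penalised logistic regression.

```python
# Hypothetical sketch of a KL-projection-based local explanation for a binary
# classifier. `predict_proba` is assumed to return the Bayesian model's
# posterior predictive probabilities P(y=1 | z) for a batch of inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression


def kl_lime_explain(x, predict_proba, n_samples=2000, sigma=0.5,
                    kernel_width=0.75, alpha=1.0, rng=None):
    """Fit a sparse linear explanation of a Bayesian predictive model around x."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]

    # 1. Perturb the instance locally and query the Bayesian predictive model.
    Z = x + sigma * rng.standard_normal((n_samples, d))
    p = predict_proba(Z)  # posterior predictive P(y=1 | z) for each perturbation

    # 2. Locality weights from an RBF kernel on the distance to x.
    dist2 = np.sum((Z - x) ** 2, axis=1)
    w = np.exp(-dist2 / (2.0 * (kernel_width * np.sqrt(d)) ** 2))

    # 3. Soft targets: duplicate each sample with labels 1 and 0, weighted by
    #    w_i * p_i and w_i * (1 - p_i). Minimising this weighted log loss is
    #    equivalent (up to a constant) to minimising the local KL divergence
    #    from the Bernoulli predictive distribution to the explanation model.
    Z2 = np.vstack([Z, Z])
    y2 = np.concatenate([np.ones(n_samples), np.zeros(n_samples)])
    w2 = np.concatenate([w * p, w * (1.0 - p)])

    # 4. The L1 penalty controls the fidelity/complexity trade-off of the
    #    explanation (larger alpha -> sparser, simpler explanation model).
    expl = LogisticRegression(penalty="l1", C=1.0 / alpha, solver="liblinear")
    expl.fit(Z2, y2, sample_weight=w2)
    return expl.coef_.ravel(), expl.intercept_[0]
```

The duplication trick in step 3 is just one convenient way to optimise a soft-target cross-entropy with standard tooling; sweeping `alpha` traces out the fidelity/complexity trade-off mentioned in the abstract.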
