A Generic and Model-Agnostic Exemplar Synthetization Framework for Explainable AI

06/06/2020
by   Antonio Barbalau, et al.
With the growing complexity of deep learning methods adopted in practical applications, there is an increasing and stringent need to explain and interpret the decisions of such methods. In this work, we focus on explainable AI and propose a novel generic and model-agnostic framework for synthesizing input exemplars that maximize a desired response from a machine learning model. To this end, we use a generative model, which acts as a prior for generating data, and traverse its latent space using a novel evolutionary strategy with momentum updates. Our framework is generic because (i) it can employ any underlying generator, e.g. Variational Auto-Encoders (VAEs) or Generative Adversarial Networks (GANs), and (ii) it can be applied to any input data, e.g. images, text samples or tabular data. Since we use a zero-order optimization method, our framework is model-agnostic, in the sense that the machine learning model we aim to explain is a black-box. We stress that our framework requires no access to, or knowledge of, the internal structure or the training data of the black-box model. We conduct experiments with two generative models, VAEs and GANs, and synthesize exemplars for various data formats (image, text and tabular), demonstrating that our framework is generic. We also apply our exemplar synthetization framework to various black-box models, for which we know only the input and output formats, showing that it is model-agnostic. Moreover, we compare our framework (available at https://github.com/antoniobarbalau/exemplar) with a model-dependent approach based on gradient descent, showing that our framework obtains equally good exemplars in a shorter computational time.
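The search described above — a generative prior whose latent space is traversed by a zero-order evolutionary strategy with momentum, guided only by the black-box model's output — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the generator and the black-box scoring function below are toy stand-ins, and all names and hyperparameters are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8

def generator(z):
    # Toy stand-in for a trained decoder (e.g. a VAE or GAN generator):
    # any fixed mapping from latent codes to input-space samples works.
    return np.tanh(z)

def black_box_score(x):
    # Toy stand-in for the black-box model's response. Only input/output
    # access is assumed; the score peaks when x matches a fixed pattern.
    pattern = np.linspace(-0.9, 0.9, LATENT_DIM)
    return -np.sum((x - pattern) ** 2)

def synthesize_exemplar(n_iters=300, pop_size=32, sigma=0.3, momentum=0.9):
    z = rng.normal(size=LATENT_DIM)   # current latent code
    velocity = np.zeros(LATENT_DIM)   # momentum term for the update
    best_z = z.copy()
    best_score = black_box_score(generator(z))
    for _ in range(n_iters):
        # Sample a population of perturbed latent codes around z.
        noise = rng.normal(scale=sigma, size=(pop_size, LATENT_DIM))
        candidates = z + noise
        scores = np.array([black_box_score(generator(c)) for c in candidates])
        # Step toward the best candidate, smoothed by momentum
        # (zero-order: no gradients of the black-box are used).
        step = candidates[scores.argmax()] - z
        velocity = momentum * velocity + (1 - momentum) * step
        z = z + velocity
        score = black_box_score(generator(z))
        if score > best_score:
            best_z, best_score = z.copy(), score
    return generator(best_z), best_score

exemplar, score = synthesize_exemplar()
```

Because the black-box is queried only through its outputs, the same loop applies unchanged to any generator and any input modality, which is the sense in which the framework is generic and model-agnostic.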

