Auditing Black-box Models for Indirect Influence

by Philip Adler et al.

Data-trained predictive models see widespread use, but for the most part they are used as black boxes that output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior, and in particular of how different features influence the model's predictions. This matters when interpreting the behavior of complex models, or when asserting that certain problematic attributes (such as race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models that lets us study the extent to which existing models take advantage of particular features in the dataset, without knowing how the models work. Our work focuses on the problem of indirect influence: how some features might influence outcomes via other, related features. As a result, we can detect the influence of an attribute even in cases where, upon direct examination, the model does not refer to that attribute at all. Our approach does not require the black-box model to be retrained. This is important if (for example) the model is only accessible via an API, and it distinguishes our work from other methods that investigate feature influence, such as feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available datasets and models. We also validate our procedure using techniques from interpretable learning and feature selection, as well as against other black-box auditing procedures.
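The core idea of the audit can be illustrated with a small sketch. The functions below are not the paper's implementation; they assume a simplified "obscuring" step that shifts each protected group's values in a feature to the global mean, removing what that feature reveals about the protected attribute, and then measures how much the black-box model's accuracy drops. Note that the model here never sees the protected attribute directly, so any accuracy drop is indirect influence carried by a correlated feature:

```python
import numpy as np

def obscure_feature(X, protected, col):
    """Remove information about `protected` from column `col` by shifting
    each protected group's values so their group means coincide with the
    global mean (a simplified stand-in for the paper's repair procedure)."""
    Xr = X.copy()
    global_mean = X[:, col].mean()
    for g in np.unique(protected):
        mask = protected == g
        Xr[mask, col] += global_mean - X[mask, col].mean()
    return Xr

def indirect_influence(model, X, y, protected):
    """Accuracy drop of a black-box `model` (a callable X -> predictions)
    after obscuring `protected` from every feature. No retraining needed."""
    base_acc = (model(X) == y).mean()
    Xr = X.copy()
    for col in range(X.shape[1]):
        Xr = obscure_feature(Xr, protected, col)
    return base_acc - (model(Xr) == y).mean()

# Toy demo: feature 0 is strongly correlated with the protected attribute,
# and the (fixed, never-retrained) model thresholds on feature 0 alone.
rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, size=n)
X = np.column_stack([
    protected + rng.normal(0.0, 0.1, size=n),  # proxy for protected
    rng.normal(0.0, 1.0, size=n),              # unrelated feature
])
y = protected
model = lambda X: (X[:, 0] > 0.5).astype(int)

influence = indirect_influence(model, X, y, protected)
```

In this toy setup the model is near-perfect before obscuring and near chance afterwards, so the measured indirect influence of the protected attribute is large, even though the attribute is not an input to the model.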


Related papers:

- Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning
- Disentangling Influence: Using Disentangled Representations to Audit Model Predictions
- Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models
- The Holdout Randomization Test: Principled and Easy Black Box Feature Selection
- Interpretable Few-shot Learning with Online Attribute Selection
- Forward and Backward Feature Selection for Query Performance Prediction
- Diagnostic Curves for Black Box Models
