Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models

11/15/2016
by Julius Adebayo, et al.

Predictive models are increasingly deployed to determine access to services such as credit, insurance, and employment. Despite potential gains in productivity and efficiency, several problems have yet to be addressed, particularly the risk of unintentional discrimination. We present an iterative procedure, based on orthogonal projection of input attributes, for enabling interpretability of black-box predictive models. Through this iterative procedure, one can quantify the relative dependence of a black-box model on its input attributes. The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of the model.
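The core idea can be illustrated with a rough sketch: for each input attribute, project out the (linear) component that the remaining attributes share with it, query the black-box model on the transformed data, and treat the change in predictions as a measure of dependence on that attribute. The code below is a simplified, hypothetical illustration of this kind of orthogonal-projection probe, not the paper's exact iterative algorithm; the function names (dependence_scores, model_predict) and the toy logistic-regression "black box" are assumptions for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def dependence_scores(model_predict, X):
    """Score how much a black-box model depends on each column of X.

    Hypothetical sketch: for each column j, remove the least-squares
    component of every other column along column j (an orthogonal
    projection), neutralize column j itself, and measure how much the
    model's outputs change relative to the original data.
    """
    n, d = X.shape
    baseline = model_predict(X)
    scores = np.zeros(d)
    for j in range(d):
        xj = X[:, [j]]
        Xp = X.copy()
        for k in range(d):
            if k == j:
                continue
            # Least-squares coefficient of column k on column j,
            # then subtract the projected component.
            coef, *_ = np.linalg.lstsq(xj, X[:, k], rcond=None)
            Xp[:, k] = X[:, k] - (xj @ coef)
        # Replace the probed column with its mean so its direct signal is gone.
        Xp[:, j] = xj.mean()
        scores[j] = np.mean(np.abs(model_predict(Xp) - baseline))
    return scores


# Toy usage with a stand-in "black box" (any prediction function works).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
print(dependence_scores(lambda Z: clf.predict_proba(Z)[:, 1], X))
```

In this toy setup the first two columns should receive the largest scores, since they alone determine the labels; a model that leans heavily on a protected attribute would be flagged the same way.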

Related research

Interpretable Companions for Black-Box Models (02/10/2020)
Auditing Black-box Models for Indirect Influence (02/23/2016)
Whitening Black-Box Neural Networks (11/06/2017)
QoS-based Trust Evaluation for Data Services as a Black Box (10/20/2021)
Runaway Feedback Loops in Predictive Policing (06/29/2017)
LUCID-GAN: Conditional Generative Models to Locate Unfairness (07/28/2023)
Augmented Fairness: An Interpretable Model Augmenting Decision-Makers' Fairness (11/17/2020)