Fair Wrapping for Black-box Predictions

01/31/2022
by Alexander Soen, et al.

We introduce a new family of techniques to post-process ("wrap") a black-box classifier in order to reduce its bias. Our technique builds on the recent analysis of improper loss functions whose optimisation can correct any twist in prediction, unfairness being treated as a twist. In the post-processing, we learn a wrapper function which we define as an α-tree, which modifies the prediction. We provide two generic boosting algorithms to learn α-trees. We show that our modification has appealing properties in terms of composition of α-trees, generalization, interpretability, and KL divergence between modified and original predictions. We exemplify the use of our technique in three fairness notions: conditional value at risk, equality of opportunity, and statistical parity; and provide experiments on several readily available datasets.
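The abstract describes the wrapper only at a high level. As a concrete illustration, here is a minimal Python sketch of the idea, assuming (i) a binary black-box classifier exposing P(y=1|x), (ii) a depth-one "tree" with one α per leaf, and (iii) a power-form tilt of the posterior. The names `alpha_tilt` and `AlphaTreeWrapper` and the exact tilt formula are our illustrative assumptions, not the paper's definitions; the paper learns the tree structure and leaf values with boosting, which is omitted here.

```python
import numpy as np

def alpha_tilt(p, alpha):
    """Tilt a posterior estimate p in (0, 1) by exponent alpha.

    ASSUMPTION: this power-form correction is an illustrative reading
    of the alpha-correction; see the paper for the exact definition.
    alpha = 1 is the identity (prediction is left unchanged).
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)  # avoid 0/0 at the boundary
    num = p ** alpha
    return num / (num + (1.0 - p) ** alpha)

class AlphaTreeWrapper:
    """Hypothetical wrapper: a fixed partition of the input space
    (here a single split on one feature) with one alpha per leaf.
    The black box itself is never modified, only its output."""

    def __init__(self, black_box, feature, threshold, alpha_left, alpha_right):
        self.black_box = black_box      # callable: X -> P(y=1 | x)
        self.feature = feature          # index of the split feature
        self.threshold = threshold      # split threshold
        self.alphas = (alpha_left, alpha_right)

    def predict_proba(self, X):
        p = self.black_box(X)           # original black-box posterior
        go_right = X[:, self.feature] > self.threshold
        alpha = np.where(go_right, self.alphas[1], self.alphas[0])
        return alpha_tilt(p, alpha)     # wrapped (post-processed) posterior
```

In this sketch, leaf values equal to 1 reproduce the original predictions exactly, so the distance of the leaf α values from 1 is one natural handle on the trade-off the abstract mentions between fidelity (KL divergence to the original predictions) and the chosen fairness criterion.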

