Kernel Dependence Regularizers and Gaussian Processes with Applications to Algorithmic Fairness

11/11/2019
by Zhu Li, et al.

Current adoption of machine learning in industrial, societal and economic activities has raised concerns about the fairness, equity and ethics of automated decisions. Predictive models are often developed using biased datasets and thus retain or even exacerbate biases in their decisions and recommendations. Removing sensitive covariates, such as gender or race, is insufficient to remedy this issue, since the biases may be retained through other related covariates. We present a regularization approach to this problem that trades off predictive accuracy of the learned models (with respect to biased labels) for fairness in terms of statistical parity, i.e. independence of the decisions from the sensitive covariates. In particular, we consider a general framework of regularized empirical risk minimization over reproducing kernel Hilbert spaces and impose an additional regularizer of dependence between predictors and sensitive covariates using kernel-based measures of dependence, namely the Hilbert-Schmidt Independence Criterion (HSIC) and its normalized version. This approach leads to a closed-form solution in the case of the squared loss, i.e. ridge regression. Moreover, we show that the dependence regularizer has an interpretation as a modification of the corresponding Gaussian process (GP) prior. As a consequence, we derive a GP model whose prior encourages fairness with respect to the sensitive variables, allowing principled hyperparameter selection and study of the relative relevance of covariates under fairness constraints. Experimental results on synthetic examples and on real problems of income and crime prediction illustrate the potential of the approach to improve the fairness of automated decisions.
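The squared-loss case mentioned in the abstract can be sketched as follows. This is an illustrative implementation, not the paper's exact formulation: the function and parameter names (`hsic_fair_ridge`, `lam`, `mu`, `gamma`) are invented here, and a linear kernel is assumed on the predictions so that the empirical HSIC penalty tr(K_f H K_S H)/n² becomes quadratic in the dual coefficients, which yields a ridge-style closed-form solution.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def hsic(f, S, gamma=1.0):
    """Empirical HSIC between predictions f (linear kernel) and
    sensitive covariates S (RBF kernel): tr(K_f H K_S H) / n^2."""
    n = len(f)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kf = np.outer(f, f)                    # linear kernel on predictions
    Ks = rbf_kernel(S, gamma)
    return np.trace(Kf @ H @ Ks @ H) / n ** 2

def hsic_fair_ridge(X, y, S, lam=1e-2, mu=1.0, gamma=1.0):
    """Kernel ridge regression with an HSIC dependence penalty (sketch).

    Minimizes  ||y - K a||^2 + lam * n * a' K a
             + (mu / n^2) * a' K H K_S H K a
    over dual coefficients a, where f = K a are the predictions.
    Setting the gradient to zero gives the closed-form linear system
    (K + lam * n * I + (mu / n^2) H K_S H K) a = y.
    """
    n = len(y)
    K = rbf_kernel(X, gamma)               # kernel on inputs
    Ks = rbf_kernel(S, gamma)              # kernel on sensitive covariates
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    A = K + lam * n * np.eye(n) + (mu / n ** 2) * H @ Ks @ H @ K
    alpha = np.linalg.solve(A, y)
    return alpha, K
```

Increasing `mu` trades predictive fit for lower statistical dependence between the predictions and `S`; with `mu = 0` the solver reduces to ordinary kernel ridge regression.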


