Paired-Consistency: An Example-Based Model-Agnostic Approach to Fairness Regularization in Machine Learning

08/07/2019
by   Yair Horesh, et al.

As AI systems grow in complexity, it becomes increasingly hard to ensure non-discrimination on the basis of protected attributes such as gender, age, and race. Many recent methods address this issue, but only when the protected attribute is explicitly available to the algorithm. We address the setting where it is not: either no protected attribute is explicit, or there is a large set of them. Instead, we assume the existence of a fair domain expert capable of generating an extension to the labeled dataset - a small set of example pairs, each differing in a subset of protected variables but judged to warrant a similar model response. We define a performance metric, paired consistency, which measures how close the outputs assigned by a classifier or a regressor are on these carefully selected pairs of examples for which fairness dictates identical decisions. In some cases consistency can be embedded within the loss function during optimization, serving as a fairness regularizer; in others it serves as a tool for fair model selection. We demonstrate our method on the well-studied Census Income dataset.
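To make the regularization idea concrete, here is a minimal sketch of how a paired-consistency penalty could be folded into a training loss. It uses PyTorch; the linear model, the random pair tensors, and the weight lambda_fair are illustrative placeholders under assumed inputs, not the paper's implementation.

import torch
import torch.nn as nn

def paired_consistency(model: nn.Module,
                       x_a: torch.Tensor,
                       x_b: torch.Tensor) -> torch.Tensor:
    # Mean squared difference of model outputs over the expert-curated
    # pairs that fairness dictates should receive similar responses.
    return ((model(x_a) - model(x_b)) ** 2).mean()

# Toy setup: a linear scorer over 5 features (placeholder model).
torch.manual_seed(0)
model = nn.Linear(5, 1)
task_loss_fn = nn.BCEWithLogitsLoss()

# Labeled training data (random placeholders).
x_train = torch.randn(32, 5)
y_train = torch.randint(0, 2, (32, 1)).float()

# Expert-curated pairs: each pair differs on protected variables but is
# judged to warrant the same decision (random placeholders here).
x_pairs_a = torch.randn(8, 5)
x_pairs_b = torch.randn(8, 5)

lambda_fair = 1.0  # strength of the fairness regularizer (a hyperparameter)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(100):
    optimizer.zero_grad()
    # Task loss plus the paired-consistency fairness regularizer.
    loss = task_loss_fn(model(x_train), y_train)
    loss = loss + lambda_fair * paired_consistency(model, x_pairs_a, x_pairs_b)
    loss.backward()
    optimizer.step()

The same paired_consistency value, computed on a held-out set of expert pairs, could also be used for fair model selection: among models with comparable task accuracy, prefer the one with the lower consistency penalty.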


research · 07/17/2021
Fair Balance: Mitigating Machine Learning Bias Against Multiple Protected Attributes With Data Balancing
This paper aims to improve machine learning fairness on multiple protect...

research · 11/09/2020
Mitigating Bias in Set Selection with Noisy Protected Attributes
Subset selection algorithms are ubiquitous in AI-driven applications, in...

research · 07/06/2023
When Fair Classification Meets Noisy Protected Attributes
The operationalization of algorithmic fairness comes with several practi...

research · 09/10/2018
Automated Test Generation to Detect Individual Discrimination in AI Models
Dependability on AI models is of utmost importance to ensure full accept...

research · 04/09/2022
Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks
We show that deep neural networks that satisfy demographic parity do so ...

research · 06/30/2021
Unaware Fairness: Hierarchical Random Forest for Protected Classes
Procedural fairness has been a public concern, which leads to controvers...

research · 02/12/2023
Multi-dimensional discrimination in Law and Machine Learning – A comparative overview
AI-driven decision-making can lead to discrimination against certain ind...
