When Mitigating Bias is Unfair: A Comprehensive Study on the Impact of Bias Mitigation Algorithms

02/14/2023
by Natasa Krco, et al.

Most work on the fairness of machine learning systems focuses on the blind optimization of common fairness metrics, such as Demographic Parity and Equalized Odds. In this paper, we conduct a comparative study of several bias mitigation approaches to investigate their behavior at a fine granularity: the prediction level. Our objective is to characterize the differences between fair models obtained with different approaches. Given comparable fairness and accuracy, do the different bias mitigation approaches impact a similar number of individuals? Do they mitigate bias in a similar way? Do they affect the same individuals when debiasing a model? Our findings show that bias mitigation approaches differ considerably in their strategies, both in the number of individuals impacted and in the populations targeted. More surprisingly, these differences hold even across several runs of the same mitigation approach. These findings raise questions about the limitations of current group fairness metrics, as well as the arbitrariness, and hence unfairness, of the whole debiasing process.
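To make the prediction-level comparison concrete, the following is a minimal, hypothetical sketch (not code from the paper). It computes the Demographic Parity and Equalized Odds gaps for two debiased models and then measures how much the sets of individuals whose predictions were changed actually overlap. All data, model outputs, and helper names below are illustrative assumptions, using only NumPy.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # |P(yhat=1 | group=1) - P(yhat=1 | group=0)| for binary predictions
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equalized_odds_gap(y_true, y_pred, group):
    # Largest gap between groups in TPR (y=1) and FPR (y=0)
    gaps = []
    for y in (0, 1):
        mask = y_true == y
        gaps.append(abs(y_pred[mask & (group == 1)].mean()
                        - y_pred[mask & (group == 0)].mean()))
    return max(gaps)

# Hypothetical baseline predictions and two "fair" models that each flip the
# predictions of 100 (possibly different) individuals.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
baseline = rng.integers(0, 2, size=1000)
fair_a, fair_b = baseline.copy(), baseline.copy()
flip_a = rng.choice(1000, size=100, replace=False)
flip_b = rng.choice(1000, size=100, replace=False)
fair_a[flip_a] = 1 - fair_a[flip_a]
fair_b[flip_b] = 1 - fair_b[flip_b]

# Group-level fairness metrics can look comparable for both models...
print(demographic_parity_gap(fair_a, group), demographic_parity_gap(fair_b, group))
print(equalized_odds_gap(y_true, fair_a, group), equalized_odds_gap(y_true, fair_b, group))

# ...while the sets of individuals whose predictions were altered barely overlap.
changed_a = np.flatnonzero(fair_a != baseline)
changed_b = np.flatnonzero(fair_b != baseline)
overlap = len(np.intersect1d(changed_a, changed_b)) / max(len(changed_a), 1)
print(f"{len(changed_a)} vs {len(changed_b)} impacted individuals, overlap={overlap:.2f}")

The point of the sketch is that two models can satisfy the same group-level criteria while changing largely disjoint sets of individual predictions, which is the kind of fine-grained difference the paper studies.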

