One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification

10/26/2020
by Kenji Kobayashi, et al.

With the widespread adoption of machine learning in the real world, the impact of discriminatory bias has attracted attention. In recent years, various methods to mitigate such bias have been proposed. However, most of them do not consider intersectional bias, which causes people belonging to specific subgroups of a protected group to be treated unfairly when multiple sensitive attributes are taken into account. To mitigate this bias, we propose a method called One-vs.-One Mitigation, which applies a pairwise comparison between the subgroups defined by the sensitive attributes to fairness-aware machine learning for binary classification. We compare our method with conventional fairness-aware binary classification methods in comprehensive settings covering three approaches (pre-processing, in-processing, and post-processing), six metrics (the ratio and difference of demographic parity, equalized odds, and equal opportunity), and two real-world datasets (Adult and COMPAS). In all settings, our method mitigates intersectional bias substantially better than the conventional methods. These results open up the potential of fairness-aware binary classification for solving more realistic problems that arise when there are multiple sensitive attributes.
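As a rough illustration of the one-vs.-one idea, the sketch below measures a demographic-parity difference over every pair of intersectional subgroups, where each subgroup is one combination of sensitive-attribute values, instead of comparing along a single sensitive attribute at a time. This is a minimal sketch of the evaluation side only; the function name ovo_demographic_parity_diff and the toy data are hypothetical and not taken from the paper, whose method also applies the pairwise comparison inside the mitigation process itself.

```python
# Hypothetical sketch: one-vs.-one demographic-parity difference over
# intersectional subgroups. Names and data are illustrative, not the
# paper's implementation.
from itertools import combinations
import numpy as np

def ovo_demographic_parity_diff(y_pred, sensitive):
    """Max demographic-parity difference over all pairs of
    intersectional subgroups.

    y_pred    : 1-D array of binary predictions (0/1)
    sensitive : 2-D array, one column per sensitive attribute; each
                row's combination of values defines a subgroup
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    # Each unique combination of sensitive-attribute values is one
    # intersectional subgroup.
    subgroups = {tuple(row) for row in sensitive}
    # Positive-prediction rate within each subgroup.
    rates = {g: y_pred[(sensitive == g).all(axis=1)].mean()
             for g in subgroups}
    # One-vs.-one: compare every pair of subgroups, not just the
    # groups of one attribute in isolation.
    return max(abs(rates[a] - rates[b])
               for a, b in combinations(rates, 2))

# Example: two sensitive attributes (e.g., sex and race) give four
# intersectional subgroups of two samples each.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sensitive = np.array([[0, 0], [0, 0], [0, 1], [0, 1],
                      [1, 0], [1, 0], [1, 1], [1, 1]])
print(ovo_demographic_parity_diff(y_pred, sensitive))  # 1.0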

Related research

12/14/2018
Bias Mitigation Post-processing for Individual and Group Fairness
Whereas previous post-processing approaches for increasing the fairness ...

05/31/2023
Bias Mitigation Methods for Binary Classification Decision-Making Systems: Survey and Recommendations
Bias mitigation methods for binary classification decision-making system...

02/16/2020
Convex Fairness Constrained Model Using Causal Effect Estimators
Recent years have seen much research on fairness in machine learning. He...

02/23/2022
Fairness-Aware Naive Bayes Classifier for Data with Multiple Sensitive Features
Fairness-aware machine learning seeks to maximise utility in generating ...

04/14/2023
Fairness in Visual Clustering: A Novel Transformer Clustering Approach
Promoting fairness for deep clustering models in unsupervised clustering...

03/31/2020
A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set
Applications based on Machine Learning models have now become an indispe...

12/11/2020
RENATA: REpreseNtation And Training Alteration for Bias Mitigation
We propose a novel method for enforcing AI fairness with respect to prot...
