iFlipper: Label Flipping for Individual Fairness

09/15/2022
by Hantian Zhang, et al.

As machine learning becomes prevalent, mitigating any unfairness present in the training data becomes critical. Among the various notions of fairness, this paper focuses on the well-known individual fairness, which states that similar individuals should be treated similarly. While individual fairness can be improved when training a model (in-processing), we contend that fixing the data before model training (pre-processing) is a more fundamental solution. In particular, we show that label flipping is an effective pre-processing technique for improving individual fairness. Our system iFlipper solves the optimization problem of minimally flipping labels given a limit on the number of individual fairness violations, where a violation occurs when two similar examples in the training data have different labels. We first prove that the problem is NP-hard. We then propose an approximate linear programming algorithm and provide theoretical guarantees on how close its result is to the optimal solution in terms of the number of label flips. We also propose techniques for pushing the linear programming solution closer to optimal without exceeding the violation limit. Experiments on real datasets show that iFlipper significantly outperforms other pre-processing baselines in terms of individual fairness and accuracy on unseen test sets. In addition, iFlipper can be combined with in-processing techniques for even better results.
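To make the optimization problem concrete, below is a minimal sketch of its linear programming relaxation using `scipy.optimize.linprog`. It encodes binary labels as fractional variables in [0, 1], minimizes the total label change, and caps the summed pairwise violations over a given set of similar pairs. The function name, variable layout, and the naive 0.5-threshold rounding at the end are illustrative assumptions; the paper's actual algorithm rounds the LP solution with theoretical guarantees, which this sketch does not reproduce.

```python
import numpy as np
from scipy.optimize import linprog

def minimal_flips_lp(labels, edges, max_violations):
    """LP relaxation of the label-flipping problem: choose fractional
    labels y in [0, 1] minimizing total label change, subject to a cap
    on the total violation across similar pairs (edges)."""
    n, E = len(labels), len(edges)
    # Variable layout: [y (n), f (n, per-example flip amount), v (E, per-edge violation)]
    num_vars = 2 * n + E
    c = np.zeros(num_vars)
    c[n:2 * n] = 1.0  # objective: minimize total flip amount

    A, b = [], []
    for i, yi in enumerate(labels):
        # f_i >= |y_i - labels[i]|, linearized as two inequalities
        row = np.zeros(num_vars); row[i] = 1.0; row[n + i] = -1.0
        A.append(row); b.append(yi)
        row = np.zeros(num_vars); row[i] = -1.0; row[n + i] = -1.0
        A.append(row); b.append(-yi)
    for e, (i, j) in enumerate(edges):
        # v_e >= |y_i - y_j|, linearized the same way
        row = np.zeros(num_vars); row[i] = 1.0; row[j] = -1.0; row[2 * n + e] = -1.0
        A.append(row); b.append(0.0)
        row = np.zeros(num_vars); row[i] = -1.0; row[j] = 1.0; row[2 * n + e] = -1.0
        A.append(row); b.append(0.0)
    # total violations must not exceed the limit
    row = np.zeros(num_vars); row[2 * n:] = 1.0
    A.append(row); b.append(max_violations)

    bounds = [(0, 1)] * n + [(0, None)] * (n + E)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
    # Naive rounding back to {0, 1}; the paper uses a guaranteed rounding instead.
    return res, (res.x[:n] >= 0.5).astype(int)
```

For example, with labels `[0, 1, 0, 1]` and similar pairs `[(0, 1), (1, 2), (2, 3), (0, 2)]`, a violation limit of 0 forces every connected pair to agree, and the LP objective of 2 matches the intuition that two labels must be flipped to make the (connected) graph label-homogeneous.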


