FairBatch: Batch Selection for Model Fairness

12/03/2020
by Yuji Roh, et al.

Training a fair machine learning model is essential to prevent demographic disparity. Existing techniques for improving model fairness require broad changes in either data preprocessing or model training, making them difficult to adopt in potentially already complex machine learning systems. We address this problem through the lens of bilevel optimization. While keeping the standard training algorithm as the inner optimizer, we incorporate an outer optimizer that equips the inner problem with an additional functionality: adaptively selecting minibatch sizes to improve model fairness. Our batch selection algorithm, which we call FairBatch, implements this optimization and supports prominent fairness measures: equal opportunity, equalized odds, and demographic parity. FairBatch comes with a significant implementation benefit: it does not require any modification to data preprocessing or model training. For instance, a single-line change of PyTorch code, replacing the batch selection part of model training, suffices to employ FairBatch. Our experiments on both synthetic and benchmark real data demonstrate that FairBatch provides these functionalities while achieving performance comparable to (or even better than) the state of the art. Furthermore, FairBatch can readily improve the fairness of any pre-trained model simply via fine-tuning. It is also compatible with existing batch selection techniques intended for other purposes, such as faster convergence, thus gracefully achieving multiple goals at once.
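To make the bilevel idea concrete, here is a minimal pure-Python sketch of adaptive batch selection in the spirit described above. It is not the authors' FairBatch implementation or API: the class name `FairBatchSampler`, the step size `alpha`, and the signed-gradient-style ratio update are illustrative assumptions. The inner optimizer (ordinary model training on each batch) is left out; the sketch only shows the outer step that reweights per-group sampling ratios based on observed per-group losses, assuming two or more groups.

```python
import random


class FairBatchSampler:
    """Illustrative sketch (not the official FairBatch code): the outer
    optimizer adapts per-group sampling ratios between training steps,
    while the inner optimizer trains the model on the sampled batches."""

    def __init__(self, group_indices, batch_size, alpha=0.05, seed=0):
        # group_indices: dict mapping a sensitive-group id to the list of
        # example indices belonging to that group.
        self.groups = group_indices
        self.batch_size = batch_size
        self.alpha = alpha  # outer-optimizer step size (assumed value)
        # Start from uniform per-group sampling ratios.
        self.ratios = {g: 1.0 / len(group_indices) for g in group_indices}
        self.rng = random.Random(seed)

    def sample_batch(self):
        # Draw a batch whose per-group counts follow the current ratios.
        batch = []
        for g, idxs in self.groups.items():
            k = max(1, round(self.ratios[g] * self.batch_size))
            batch.extend(self.rng.choices(idxs, k=k))
        return batch[: self.batch_size]

    def update(self, group_losses):
        # Outer step (signed-gradient style): upweight the group with the
        # largest loss, downweight the rest, then renormalize to sum to 1.
        worst = max(group_losses, key=group_losses.get)
        n_other = len(self.ratios) - 1
        for g in self.ratios:
            delta = self.alpha if g == worst else -self.alpha / n_other
            self.ratios[g] = max(self.ratios[g] + delta, 0.0)
        total = sum(self.ratios.values())
        for g in self.ratios:
            self.ratios[g] /= total
```

In a training loop, one would call `sample_batch()` to build each minibatch, compute per-group losses on it, and call `update(...)` before the next batch; swapping a standard random sampler for such an object is what a "single-line change" to the data-loading code would look like in practice.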


