FairBatch: Batch Selection for Model Fairness

by Yuji Roh, et al.

Training a fair machine learning model is essential to prevent demographic disparity. Existing techniques for improving model fairness require broad changes in either data preprocessing or model training, rendering them difficult to adopt for potentially already complex machine learning systems. We address this problem through the lens of bilevel optimization. While keeping the standard training algorithm as an inner optimizer, we incorporate an outer optimizer so as to equip the inner problem with an additional functionality: adaptively selecting minibatch sizes for the purpose of improving model fairness. Our batch selection algorithm, which we call FairBatch, implements this optimization and supports prominent fairness measures: equal opportunity, equalized odds, and demographic parity. FairBatch comes with a significant implementation benefit: it does not require any modification to data preprocessing or model training. For instance, a single-line change of PyTorch code, replacing the batch-selection part of model training, suffices to employ FairBatch. Our experiments, conducted on both synthetic and benchmark real data, demonstrate that FairBatch provides these functionalities while achieving accuracy and fairness comparable to (or even better than) the state of the art. Furthermore, FairBatch can readily improve the fairness of any pre-trained model simply via fine-tuning. It is also compatible with existing batch selection techniques intended for different purposes, such as faster convergence, thus gracefully achieving multiple purposes.
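The core idea of the outer optimizer, as described in the abstract, is to adaptively adjust how much each sensitive group contributes to every minibatch, nudging sampling mass toward the group that the current model treats worse. The sketch below is an illustrative, simplified version of that idea in plain Python, not the authors' released code: the class name `FairBatchStyleSampler`, the ratio-update rule, and the step size `alpha` are all assumptions made for exposition.

```python
import random

class FairBatchStyleSampler:
    """Illustrative sketch of adaptive per-group batch selection.

    Maintains one sampling ratio per sensitive group and, after each
    epoch, shifts sampling mass toward the group with the largest
    observed loss -- mimicking the spirit of FairBatch's outer
    optimizer (details here are simplified assumptions, not the
    paper's exact update rule)."""

    def __init__(self, group_indices, batch_size, alpha=0.05, seed=0):
        # group_indices: dict mapping group label -> list of example indices
        self.group_indices = group_indices
        self.batch_size = batch_size
        self.alpha = alpha            # outer-optimizer step size for the ratios
        self.rng = random.Random(seed)
        g = len(group_indices)
        self.ratios = {k: 1.0 / g for k in group_indices}  # start uniform

    def sample_batch(self):
        # Draw from each group in proportion to its current ratio.
        batch = []
        for k, idxs in self.group_indices.items():
            n = max(1, round(self.ratios[k] * self.batch_size))
            batch.extend(self.rng.choices(idxs, k=n))
        return batch

    def update(self, group_losses):
        # Outer step: move sampling mass toward the worst-off group,
        # then clamp and renormalize so the ratios stay a distribution.
        worst = max(group_losses, key=group_losses.get)
        others = len(self.ratios) - 1
        for k in self.ratios:
            step = self.alpha if k == worst else -self.alpha / others
            self.ratios[k] = min(max(self.ratios[k] + step, 0.0), 1.0)
        total = sum(self.ratios.values())
        for k in self.ratios:
            self.ratios[k] /= total
```

In an actual PyTorch training loop, the abstract's "single-line change" would correspond to passing such a sampler to the existing `DataLoader` in place of the default batch sampler, leaving the model and loss code untouched.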


