Variance, Self-Consistency, and Arbitrariness in Fair Classification

01/27/2023
by A. Feder Cooper, et al.

In fair classification, it is common to train a model and to compare and correct for disparities in subgroup-specific error rates. However, even if a model's classification decisions satisfy a fairness metric, it is not necessarily the case that these decisions are equally confident. This becomes clear if we measure variance: we can fix everything in the learning process except the subset of training data, train multiple models, measure (dis)agreement in predictions for each test example, and interpret disagreement to mean that the learning process is unstable with respect to that example's classification decision. Empirically, some decisions can in fact be so unstable that they are effectively arbitrary. To reduce this arbitrariness, we formalize a notion of self-consistency of a learning process, develop an ensembling algorithm that provably increases self-consistency, and empirically demonstrate that it often improves both fairness and accuracy. Further, our evaluation reveals a startling observation: applying ensembling to common fair classification benchmarks can significantly reduce subgroup error rate disparities, without employing common pre-, in-, or post-processing fairness interventions. Taken together, our results indicate that variance, particularly on small datasets, can muddle the reliability of conclusions about fairness. One solution is to develop larger benchmark tasks. To this end, we release a toolkit that makes the Home Mortgage Disclosure Act datasets easily usable for future research.
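To make the variance-measurement procedure concrete, below is a minimal sketch in Python: it trains several models that differ only in which subset of the training data they see, measures per-example (dis)agreement on the test set, and forms a plain majority-vote ensemble. This is an illustration under assumed choices (scikit-learn logistic regression, synthetic data, half-size training subsets, a 0.6 agreement threshold), not the paper's exact self-consistency definition or ensembling algorithm.

```python
# Sketch: per-example disagreement across models trained on different training
# subsets, plus a simple majority-vote ensemble. Illustrative only; the model,
# subset sampling, and thresholds are assumptions, not the paper's algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_models = 25
rng = np.random.default_rng(0)
preds = np.empty((n_models, len(X_test)), dtype=int)

for i in range(n_models):
    # Fix everything in the learning process except the training subset.
    idx = rng.choice(len(X_train), size=len(X_train) // 2, replace=False)
    model = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    preds[i] = model.predict(X_test)

# Per-example agreement: fraction of models that vote for the majority label.
pos_rate = preds.mean(axis=0)
agreement = np.maximum(pos_rate, 1.0 - pos_rate)

# Examples with agreement near 0.5 are effectively decided arbitrarily.
print("mean agreement:", agreement.mean())
print("fraction of near-arbitrary decisions (< 0.6):", (agreement < 0.6).mean())

# Majority-vote ensemble over the runs: its decision no longer depends on any
# single training subset, which is the intuition behind the paper's approach.
ensemble_pred = (pos_rate >= 0.5).astype(int)
print("ensemble accuracy:", (ensemble_pred == y_test).mean())
```

In this sketch, subgroup-specific disagreement and error rates could be computed by splitting `agreement` and `ensemble_pred` by a (hypothetical) group attribute; the paper's results suggest that much of the apparent disparity on small benchmarks is driven by exactly this kind of instability.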

Related research

06/01/2022 — FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks
Algorithmic decision making driven by neural networks has become very pr...

08/01/2022 — GetFair: Generalized Fairness Tuning of Classification Models
We present GetFair, a novel framework for tuning fairness of classificat...

06/06/2022 — FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data
Algorithmic fairness plays an important role in machine learning and imp...

09/04/2020 — Fair and Useful Cohort Selection
As important decisions about the distribution of society's resources bec...

10/13/2020 — FaiR-N: Fair and Robust Neural Networks for Structured Data
Fairness in machine learning is crucial when individuals are subject to ...

10/25/2021 — Fair Enough: Searching for Sufficient Measures of Fairness
Testing machine learning software for ethical bias has become a pressing...
