An Empirical Study of Rich Subgroup Fairness for Machine Learning

08/24/2018
by Michael Kearns, et al.

Kearns et al. [2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across protected groups), but then asks that this constraint hold over an exponentially or infinitely large collection of subgroups defined by a class of functions with bounded VC dimension. They give an algorithm guaranteed to learn subject to this constraint, under the condition that it has access to oracles for perfectly learning absent a fairness constraint. In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal et al. [2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes. We find that, in general, the Kearns et al. algorithm converges quickly, that large gains in fairness can be obtained at mild cost to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness. We also provide a number of analyses and visualizations of the dynamics and behavior of the Kearns et al. algorithm. Overall, we find this algorithm to be effective on real data, and rich subgroup fairness to be a viable notion in practice.
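
To make the auditing step concrete, the following is a minimal sketch (in Python, not the authors' implementation) of how one might audit a trained classifier for rich subgroup unfairness with respect to false positive rates. Mirroring the fast-heuristic instantiation discussed above, a simple least-squares regression over the protected attributes stands in for the learning oracle, and subgroups are linear threshold functions. The function name, data layout, and synthetic example are illustrative assumptions.

```python
# Illustrative sketch: heuristic audit for rich subgroup unfairness with
# respect to false positive (FP) rates, in the spirit of Kearns et al. [2018].
# Subgroups are linear threshold functions over the protected attributes;
# a least-squares regression stands in for the learning oracle.
import numpy as np
from sklearn.linear_model import LinearRegression

def audit_fp_subgroup(X_protected, y, y_pred):
    """Return (mask, disparity): a subgroup with large size-weighted
    FP-rate disparity, found heuristically.

    X_protected : (n, d) array of protected attributes
    y           : (n,) true binary labels
    y_pred      : (n,) classifier predictions (0/1)
    """
    neg = (y == 0)                       # FP rates are defined on true negatives
    fp_base = y_pred[neg].mean()         # base (population) FP rate
    residual = y_pred[neg] - fp_base     # positive exactly on false positives

    # Oracle step: fit a linear score over the protected attributes that
    # correlates with the FP residual; thresholding at zero proposes a
    # candidate subgroup (a linear threshold function).
    oracle = LinearRegression().fit(X_protected[neg], residual)
    scores = oracle.predict(X_protected)

    best_mask, best_disp = None, 0.0
    for mask in (scores > 0, scores <= 0):   # try the group and its complement
        g_neg = mask & neg
        if not g_neg.any():
            continue
        alpha = g_neg.mean()                 # Pr[g(x)=1, y=0], the subgroup weight
        disp = alpha * abs(y_pred[g_neg].mean() - fp_base)
        if disp > best_disp:
            best_mask, best_disp = mask, disp
    return best_mask, best_disp

# Purely synthetic usage example: a classifier that is deliberately unfair
# to the subgroup whose first protected attribute is positive.
rng = np.random.default_rng(0)
Xp = rng.normal(size=(1000, 3))
y = rng.integers(0, 2, size=1000)
y_hat = ((Xp[:, 0] > 0) | (y == 1)).astype(int)
mask, disp = audit_fp_subgroup(Xp, y, y_hat)
print(f"weighted FP disparity of worst subgroup found: {disp:.3f}")
```

Note the audit maximizes the size-weighted disparity α(g)·|FP(g) − FP|, not the raw rate gap; this is the weighting used in the rich subgroup fairness definition, and it prevents the auditor from flagging vanishingly small subgroups.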
