Bounding and Approximating Intersectional Fairness through Marginal Fairness

06/12/2022
by Mathieu Molina, et al.

Discrimination in machine learning often arises along multiple dimensions (a.k.a. protected attributes); it is then desirable to ensure intersectional fairness, i.e., that no subgroup is discriminated against. It is known that ensuring marginal fairness for every dimension independently is not sufficient in general. Due to the exponential number of subgroups, however, directly measuring intersectional fairness from data is infeasible. In this paper, our primary goal is to understand in detail the relationship between marginal and intersectional fairness through statistical analysis. We first identify a set of sufficient conditions under which an exact relationship can be obtained. Then, we prove high-probability bounds on intersectional fairness in the general case, easily computable from marginal fairness and other meaningful statistical quantities. Beyond their descriptive value, we show that these theoretical bounds can be leveraged to derive a heuristic that improves the approximation and bounds of intersectional fairness by choosing, in a relevant manner, the protected attributes over which intersectional subgroups are described. Finally, we test the performance of our approximations and bounds on real and synthetic datasets.
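To make the marginal-versus-intersectional distinction concrete, the sketch below (not from the paper; the variable names and the synthetic decision rule are illustrative) estimates demographic-parity gaps for two binary protected attributes, first marginally (per attribute) and then intersectionally (over all joint subgroups). With an XOR-style decision rule, both marginal gaps are close to zero while the intersectional gap is large, which is exactly why marginal fairness alone is not sufficient.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two binary protected attributes and a binary decision.
n = 100_000
a1 = rng.integers(0, 2, n)
a2 = rng.integers(0, 2, n)

# XOR-style decision rule: acceptance probability 0.7 when a1 != a2, else 0.3.
# Each attribute taken alone sees a ~0.5 acceptance rate in both of its groups,
# but the joint subgroups are treated very differently.
p_accept = np.where(a1 != a2, 0.7, 0.3)
y_hat = rng.random(n) < p_accept

def selection_rate(mask):
    """Acceptance rate within the group selected by the boolean mask."""
    return y_hat[mask].mean()

# Marginal demographic-parity gaps: one gap per protected attribute.
marginal_gaps = {
    name: abs(selection_rate(attr == 0) - selection_rate(attr == 1))
    for name, attr in {"a1": a1, "a2": a2}.items()
}

# Intersectional gap: largest difference over all joint subgroups of (a1, a2).
subgroup_rates = [
    selection_rate((a1 == v1) & (a2 == v2))
    for v1, v2 in itertools.product((0, 1), repeat=2)
]
intersectional_gap = max(subgroup_rates) - min(subgroup_rates)

print("marginal gaps:", marginal_gaps)            # both roughly 0
print("intersectional gap:", intersectional_gap)  # roughly 0.4
```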


Related research

07/28/2022 - Multiple Attribute Fairness: Application to Fraud Detection
We propose a fairness measure relaxing the equality conditions in the po...

02/24/2023 - Intersectional Fairness: A Fractal Approach
The issue of fairness in AI has received an increasing amount of attenti...

11/09/2020 - Mitigating Bias in Set Selection with Noisy Protected Attributes
Subset selection algorithms are ubiquitous in AI-driven applications, in...

08/24/2018 - An Empirical Study of Rich Subgroup Fairness for Machine Learning
Kearns et al. [2018] recently proposed a notion of rich subgroup fairnes...

11/18/2018 - Bayesian Modeling of Intersectional Fairness: The Variance of Bias
Intersectionality is a framework that analyzes how interlocking systems ...

01/16/2020 - Fairness Measures for Regression via Probabilistic Classification
Algorithmic fairness involves expressing notions such as equity, or reas...

04/21/2022 - Ultra-marginal Feature Importance
Scientists frequently prioritize learning from data rather than training...
