The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default

02/05/2023
by Brent Mittelstadt, et al.

In recent years fairness in machine learning (ML) has emerged as a highly active area of research and development. Most work in the field defines fairness in simple terms, where fairness means reducing gaps in performance or outcomes between demographic groups while preserving as much of the accuracy of the original system as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures suffer from both fairness and performance degradation, or "levelling down," where fairness is achieved by making every group worse off, or by bringing better-performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms, through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. This paper examines the causes and prevalence of levelling down across fairML and explores possible justifications and criticisms based on philosophical and legal theories of equality and distributive justice, as well as equality law jurisprudence. We find that fairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. We propose a first step towards substantive equality in fairML: "levelling up" systems by design through enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. We likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field and to push future discussion towards substantive equality of opportunity and away from strict egalitarianism by default. N.B. Shortened abstract; see paper for full abstract.
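To make the two central ideas concrete, the following is a minimal, illustrative Python sketch (not taken from the paper) of how one might detect levelling down after a fairness intervention and check a minimum rate constraint on per-group recall. The function names, the 0.80 threshold, the group labels, and the synthetic data are all hypothetical choices for illustration only.

```python
import numpy as np

def group_recall(y_true, y_pred, groups):
    """Recall (true positive rate) computed separately for each demographic group."""
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)
        rates[g] = y_pred[mask].mean() if mask.any() else float("nan")
    return rates

def check_levelling_down(before, after):
    """Flag groups whose performance dropped after a fairness intervention."""
    return {g: (before[g], after[g]) for g in before if after[g] < before[g]}

def satisfies_minimum_rate(rates, min_rate=0.80):
    """Minimum rate constraint: every group must reach at least `min_rate`."""
    return all(r >= min_rate for r in rates.values())

# Hypothetical example: two groups, before and after a gap-closing intervention.
rng = np.random.default_rng(0)
groups = np.array(["A"] * 500 + ["B"] * 500)
y_true = rng.integers(0, 2, size=1000)
# Before: group A is predicted correctly ~90% of the time, group B ~70%.
y_pred_before = np.where(groups == "A",
                         np.where(rng.random(1000) < 0.9, y_true, 1 - y_true),
                         np.where(rng.random(1000) < 0.7, y_true, 1 - y_true))
# After: the gap is closed by dropping everyone to ~70%, i.e. levelling down.
y_pred_after = np.where(rng.random(1000) < 0.7, y_true, 1 - y_true)

before = group_recall(y_true, y_pred_before, groups)
after = group_recall(y_true, y_pred_after, groups)
print("Recall before:", before)
print("Recall after: ", after)
print("Levelled-down groups:", check_levelling_down(before, after))
print("Meets 0.80 minimum rate constraint:", satisfies_minimum_rate(after, 0.80))
```

In this sketch the between-group gap is closed only by dragging the better-performing group down, so the levelling-down check flags the previously better-performing group and the 0.80 minimum rate constraint fails; a levelling-up intervention would instead have to raise the worse-off group's recall above the threshold.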

