Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

03/27/2019
by Aria Khademi, et al.

As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute), and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE.
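The two definitions can be illustrated on synthetic data where, unlike in real observational studies, both potential outcomes are known by construction. The sketch below is purely illustrative (variable names and the discrimination penalty are assumptions, not the paper's actual estimators, which must infer the unobserved potential outcome from data): FACE averages the unit-level causal effect of the protected attribute over the whole population, while FACT averages it only over the group with the protected attribute set.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Covariate (e.g., qualifications) and binary protected attribute (e.g., gender).
x = rng.normal(size=n)
a = rng.binomial(1, 0.5, size=n)

# Synthetic potential outcomes: y0 = outcome had A been 0, y1 = outcome had A been 1.
# A fixed penalty of -0.3 for the protected group encodes discrimination by construction.
y0 = x + rng.normal(scale=0.1, size=n)
y1 = y0 - 0.3

# In observed data only one potential outcome per unit is seen:
y_observed = np.where(a == 1, y1, y0)

# FACE: average causal effect of the protected attribute over the whole population.
face = np.mean(y1 - y0)

# FACT: average causal effect restricted to the "treated" (A = 1) group.
fact = np.mean((y1 - y0)[a == 1])

print(face, fact)
```

Here both quantities come out near -0.3, flagging discrimination; in real data they can differ, since FACT conditions on the treated group, which is the more nuanced behavior the abstract alludes to.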


Related research

- Marrying Fairness and Explainability in Supervised Learning (04/06/2022)
- Fairness in Supervised Learning: An Information Theoretic Approach (01/13/2018)
- Causal Reasoning for Algorithmic Fairness (05/15/2018)
- Causal Fairness for Outcome Control (06/08/2023)
- Causal Fair Machine Learning via Rank-Preserving Interventional Distributions (07/24/2023)
- Deconstructing Claims of Post-Treatment Bias in Observational Studies of Discrimination (06/22/2020)
- Cohort Shapley value for algorithmic fairness (05/15/2021)
