Identifying biases in legal data: An algorithmic fairness perspective

09/21/2021
by Jackson Sargent et al.

The need to address representation biases and sentencing disparities in legal case data has long been recognized. Here, we study the problem of identifying and measuring biases in large-scale legal case data from an algorithmic fairness perspective. Our approach uses two regression models: a baseline that represents the decisions of a "typical" judge as reflected in the data, and a "fair" judge that applies one of three fairness concepts. Comparing the decisions of the "typical" judge with those of the "fair" judge lets us quantify biases across demographic groups, as we demonstrate in four case studies on criminal data from Cook County, Illinois.
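
As a rough illustration of the comparison described above, the following minimal Python sketch fits a baseline "typical" judge model and derives a "fair" judge by post-processing its predictions to satisfy statistical parity (the abstract mentions three fairness concepts without naming them, so statistical parity is an assumption here). The data and column names (severity, prior_counts, group, sentence_months) are synthetic and illustrative, not the paper's schema or implementation.

    # Hedged sketch: compare a "typical" judge regression with a
    # statistical-parity-adjusted "fair" judge on synthetic case data.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "severity": rng.integers(1, 6, n),      # offense severity class (made up)
        "prior_counts": rng.poisson(2.0, n),    # prior convictions (made up)
        "group": rng.integers(0, 2, n),         # protected attribute, 0/1
    })
    # Synthetic sentences with a built-in group disparity to be detected.
    df["sentence_months"] = (
        6 * df["severity"] + 3 * df["prior_counts"]
        + 4 * df["group"] + rng.normal(0, 2, n)
    )

    X = df[["severity", "prior_counts", "group"]]
    y = df["sentence_months"]

    # "Typical" judge: baseline regression fit on the data as-is.
    typical = LinearRegression().fit(X, y)
    df["typical_pred"] = typical.predict(X)

    # "Fair" judge under statistical parity: shift each group's predictions
    # so that every group's mean predicted sentence equals the overall mean.
    overall_mean = df["typical_pred"].mean()
    group_means = df.groupby("group")["typical_pred"].transform("mean")
    df["fair_pred"] = df["typical_pred"] - group_means + overall_mean

    # Bias estimate: per-group gap between the two judges' decisions.
    gap = (df["typical_pred"] - df["fair_pred"]).groupby(df["group"]).mean()
    print(gap)  # nonzero gaps quantify the disparity across groups

On this synthetic data, the printed per-group gaps recover the disparity that was injected into the sentences, which is the kind of quantity a comparison of the two judges would surface.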

Related research:

- Are Models Trained on Indian Legal Data Fair? (03/13/2023)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity and Conditional Demographic Disparity (06/14/2023)
- Improving Fairness in Deepfake Detection (06/29/2023)
- Adversarial Scrutiny of Evidentiary Statistical Software (06/19/2022)
- A Statistical Test for Probabilistic Fairness (12/09/2020)
- A Fair, Traceable, Auditable and Participatory Randomization Tool for Legal Systems (06/04/2020)
- Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination (06/01/2019)
