In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction

05/08/2020
by   Caroline Wang, et al.

In recent years, academics and investigative journalists have criticized certain commercial risk assessments for their black-box nature and failure to satisfy competing notions of fairness. Since then, the field of interpretable machine learning has created simple yet effective algorithms, while the field of fair machine learning has proposed various mathematical definitions of fairness. However, studies from these fields are largely independent, despite the fact that many applications of machine learning to social issues require both fairness and interpretability. We explore the intersection by revisiting the recidivism prediction problem using state-of-the-art tools from interpretable machine learning, and assessing the models for performance, interpretability, and fairness. Unlike previous works, we compare against two existing risk assessments (COMPAS and the Arnold Public Safety Assessment) and train models that output probabilities rather than binary predictions. We present multiple models that beat these risk assessments in performance, and provide a fairness analysis of these models. Our results imply that machine learning models should be trained separately for separate locations, and updated over time.
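The abstract contrasts binary risk predictions with probability outputs and refers to competing mathematical definitions of fairness. As an illustrative sketch only (not the paper's exact analysis), the following shows how two common group-fairness quantities — a demographic parity gap and a per-group calibration gap — can be computed from probabilistic predictions. All function names and the toy data are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_prob, group, threshold=0.5):
    """Absolute difference in positive-prediction rates between two groups.

    y_prob: predicted recidivism probabilities; group: binary group labels.
    Illustrative metric only -- not the paper's specific fairness analysis.
    """
    y_pred = np.asarray(y_prob) >= threshold
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def calibration_gaps(y_prob, y_true, group):
    """Mean predicted probability minus observed positive rate, per group.

    A well-calibrated probabilistic model has gaps near zero in each group.
    """
    y_prob = np.asarray(y_prob, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    group = np.asarray(group)
    gaps = []
    for g in (0, 1):
        mask = group == g
        gaps.append(y_prob[mask].mean() - y_true[mask].mean())
    return gaps

# Toy usage with made-up data
probs = [0.2, 0.7, 0.6, 0.3]
groups = [0, 0, 1, 1]
labels = [0, 1, 1, 0]
print(demographic_parity_gap(probs, groups))   # 0.0 on this toy data
print(calibration_gaps(probs, labels, groups))
```

Working with probabilities rather than thresholded labels is what makes the calibration check meaningful: two groups can have identical positive-prediction rates at a given threshold while receiving systematically different probability estimates.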


