Sexism in the Judiciary

06/29/2021
by Noa Baker Gillis, et al.

We analyze 6.7 million case law documents to determine the presence of gender bias within our judicial system. We find that current bias detection methods in NLP are insufficient to determine gender bias in our case law database and propose an alternative approach. We show that the inconsistent results of existing algorithms stem from how prior research defines the biases themselves. Bias detection algorithms rely on groups of words to represent bias (e.g., 'salary,' 'job,' and 'boss' to represent employment as a potentially biased theme against women in text). However, the methods used to build these groups of words have several weaknesses, chief among them that the word lists reflect the researchers' own intuitions. We suggest two new methods for automating the creation of word lists to represent biases. We find that our methods outperform current NLP bias detection methods. Our research improves the capabilities of NLP technology to detect bias and highlights gender biases present in influential case law. To test our bias detection method's performance, we regress our results of bias in case law against U.S. Census data on women's participation in the workforce over the last 100 years.
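To make the word-list approach the abstract critiques concrete, here is a minimal sketch of a WEAT-style association score computed over word embeddings. Everything in it is an assumption for illustration: the `bias_score` helper, the example employment/gender word lists, and the random placeholder vectors are not the paper's lists, embeddings, or proposed method, which the abstract says replaces hand-curated lists with automatically generated ones.

```python
# Illustrative sketch only: a WEAT-style word-list bias score over embeddings.
# The word lists and toy vectors below are placeholders, not the paper's data.
import numpy as np


def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def association(word, attrs_a, attrs_b, emb):
    """Mean similarity of `word` to attribute set A minus to attribute set B."""
    v = emb[word]
    sim_a = np.mean([cosine(v, emb[a]) for a in attrs_a])
    sim_b = np.mean([cosine(v, emb[b]) for b in attrs_b])
    return sim_a - sim_b


def bias_score(theme_words, attrs_a, attrs_b, emb):
    """Average association of a themed word list (e.g. employment terms)
    with two gendered attribute sets; positive values lean toward set A."""
    return float(np.mean([association(w, attrs_a, attrs_b, emb) for w in theme_words]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["salary", "job", "boss", "he", "him", "she", "her"]
    emb = {w: rng.normal(size=50) for w in vocab}  # placeholder vectors

    employment = ["salary", "job", "boss"]  # researcher-curated theme list
    male_attrs = ["he", "him"]
    female_attrs = ["she", "her"]
    print(bias_score(employment, male_attrs, female_attrs, emb))
```

The weakness the abstract highlights is visible here: the result depends entirely on which words a researcher chose to put in `employment`, which is what motivates automating the construction of those lists.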


