Unsupervised Discovery of Implicit Gender Bias

04/17/2020
by Anjalie Field, et al.

Despite their prevalence in society, social biases are difficult to define and identify, primarily because human judgements in this domain can be unreliable. Therefore, we take an unsupervised approach to identifying gender bias at a comment or sentence level, and present a model that can surface text likely to contain bias. The main challenge in this approach is forcing the model to focus on signs of implicit bias, rather than other artifacts in the data. Thus, the core of our methodology relies on reducing the influence of confounds through propensity score matching and adversarial learning. Our analysis shows how biased comments directed towards female politicians contain mixed criticisms and references to their spouses, while comments directed towards other female public figures focus on appearance and sexualization. Ultimately, our work offers a way to capture subtle biases in various domains without relying on subjective human judgements.
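To make the confound-reduction idea concrete, below is a minimal sketch of the matching step in propensity score matching. It is an illustration, not the authors' implementation: the records, IDs, propensity scores, and the `match_pairs` helper are all hypothetical. Each comment is assumed to carry a precomputed propensity score (e.g. the predicted probability, given confounds such as topic, that the comment addresses a woman); treated units are then greedily paired with the closest-scoring unused control unit within a caliper, so that downstream comparisons contrast comments with similar confound profiles.

```python
def match_pairs(treated, control, caliper=0.05):
    """Greedy 1:1 caliper matching on propensity scores.

    treated, control: lists of (comment_id, propensity_score) tuples.
    Each treated unit is paired with the unused control unit whose
    score is closest, provided the gap is at most `caliper`.
    """
    unused = list(control)
    pairs = []
    for t_id, t_score in treated:
        best = None
        for c in unused:
            dist = abs(t_score - c[1])
            if dist <= caliper and (best is None or dist < abs(t_score - best[1])):
                best = c
        if best is not None:
            unused.remove(best)          # each control unit is used at most once
            pairs.append((t_id, best[0]))
    return pairs

# Toy data (hypothetical IDs and scores): comments addressed to women
# (treated) vs. men (control).
treated = [("f1", 0.62), ("f2", 0.30), ("f3", 0.90)]
control = [("m1", 0.60), ("m2", 0.33), ("m3", 0.10)]

print(match_pairs(treated, control))  # [('f1', 'm1'), ('f2', 'm2')]
```

Unmatched units (here `f3`, whose score has no close control counterpart) are simply dropped, which is the standard trade-off of caliper matching: better confound balance at the cost of sample size.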


