Identification of Bias Against People with Disabilities in Sentiment Analysis and Toxicity Detection Models

11/25/2021
by Pranav Narayanan Venkit, et al.

Sociodemographic biases are a common problem in natural language processing, affecting the fairness and integrity of its applications. Within sentiment analysis, these biases may undermine sentiment predictions for texts that mention personal attributes that unbiased human readers would consider neutral. Such discrimination can have serious consequences in applications of sentiment analysis in both the public and private sectors. For example, incorrect inferences in applications such as online abuse detection and opinion analysis on social media platforms can lead to unwanted ramifications, such as wrongful censoring of certain populations. In this paper, we address the discrimination against people with disabilities (PWD) by sentiment analysis and toxicity classification models. We examine sentiment and toxicity analysis models to understand in detail how they discriminate against PWD. We present the Bias Identification Test in Sentiments (BITS), a corpus of 1,126 sentences designed to probe sentiment analysis models for disability bias. We use this corpus to demonstrate statistically significant biases in four widely used sentiment analysis tools (TextBlob, VADER, Google Cloud Natural Language API, and DistilBERT) and two toxicity analysis models trained to predict toxic comments in the Jigsaw challenges (Toxic Comment Classification and Unintended Bias in Toxic Comments). The results show that all of them exhibit strong negative biases on sentences that mention disability. We publicly release the BITS corpus so that others can identify potential biases against disability in any sentiment analysis tool, and so that the corpus can be extended to test for other sociodemographic variables as well.
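The probing approach the abstract describes — scoring otherwise-identical template sentences with and without a disability mention and comparing the model's outputs — can be sketched as follows. This is a minimal illustration, not the authors' released code: the template, descriptors, and the toy lexicon scorer are stand-ins (a real probe would plug in an actual model score, e.g. VADER's compound polarity or TextBlob's sentiment polarity, and the BITS sentences).

```python
# Toy stand-in for a real sentiment model, so the harness is runnable here.
# The lexicon entries are illustrative only; they mimic the kind of spurious
# negative weight a biased model might assign to a disability term.
TOY_LEXICON = {"great": 1.0, "terrible": -1.0, "blind": -0.5}

def toy_score(sentence: str) -> float:
    """Average lexicon polarity over tokens (stand-in for a model's score)."""
    words = sentence.lower().rstrip(".").split()
    hits = [TOY_LEXICON[w] for w in words if w in TOY_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def probe(template: str, descriptors: list[str], score_fn) -> dict[str, float]:
    """Score the same template filled with each descriptor."""
    return {d: score_fn(template.format(person=d)) for d in descriptors}

scores = probe("I am a {person} person.", ["tall", "blind"], toy_score)
# A fair model should give near-equal scores to both fillers, since an
# unbiased reader would consider both sentences neutral; a gap flags bias.
gap = scores["tall"] - scores["blind"]
print(scores, gap)
```

Across a full corpus like BITS, these per-descriptor score gaps would be aggregated and tested for statistical significance rather than inspected one pair at a time.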

