Measuring Geographic Performance Disparities of Offensive Language Classifiers

09/15/2022
by Brandon Lwowski, et al.

Text classifiers are applied at scale as one-size-fits-all solutions. Yet many studies show that classifiers are biased with respect to different languages and dialects. Work on measuring and discovering these biases leaves two gaps unaddressed. First, do language, dialect, and topical content vary across geographic regions? Second, if there are differences across regions, do they impact model performance? To address these questions, we introduce GeoOLID, a novel dataset with more than 14,000 examples drawn from 15 geographically and demographically diverse cities. We perform a comprehensive analysis of geography-related content and its impact on the performance disparities of offensive language detection models. Overall, we find that current models do not generalize across locations. Moreover, we show that while offensive language models produce false positives on African American English, model performance is not correlated with each city's minority population proportions. Warning: This paper contains offensive language.
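The abstract's correlation finding can be made concrete with a short sketch: score a classifier separately per city, then test whether per-city performance tracks each city's minority population share. This is a minimal illustration under assumptions, not the authors' code: the per-city labels, predictions, and demographic proportions below are toy stand-ins for GeoOLID, macro F1 stands in for whichever performance metric one prefers, and Pearson's r for the correlation test.

```python
# Sketch: per-city classifier performance vs. each city's minority
# population proportion. All data here are toy stand-ins, not GeoOLID;
# only the analysis pattern follows the abstract's description.
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

# Toy (gold_labels, predictions) per city, with 1 = offensive.
per_city = {
    "CityA": ([1, 0, 1, 0, 1], [1, 0, 0, 0, 1]),
    "CityB": ([0, 0, 1, 1, 0], [0, 1, 1, 1, 0]),
    "CityC": ([1, 1, 0, 0, 0], [1, 1, 0, 1, 0]),
    "CityD": ([0, 1, 1, 0, 1], [0, 1, 1, 0, 0]),
}
# Hypothetical minority population proportion for each city.
minority_share = {"CityA": 0.35, "CityB": 0.52, "CityC": 0.18, "CityD": 0.44}

cities = sorted(per_city)
# Macro F1 as an example per-city performance metric.
scores = [f1_score(*per_city[c], average="macro") for c in cities]
shares = [minority_share[c] for c in cities]

# Pearson's r between per-city performance and minority share; on real
# data the paper reports no such correlation.
r, p = pearsonr(scores, shares)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```

With real data, each city's tuple would hold the gold labels and model predictions for that city's GeoOLID examples; the pattern of grouping evaluation by location is what exposes the cross-city generalization gap the paper reports.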

Related research

Fairness in Language Models Beyond English: Gaps and Challenges (02/24/2023)
With language models becoming increasingly ubiquitous, it has become ess...

Evaluating the Deductive Competence of Large Language Models (09/11/2023)
The development of highly fluent large language models (LLMs) has prompt...

Challenges in Automated Debiasing for Toxic Language Detection (01/29/2021)
Biased associations have been a challenge in the development of classifi...

Evaluation of African American Language Bias in Natural Language Generation (05/23/2023)
We evaluate how well LLMs understand African American Language (AAL) in ...

Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP (02/28/2021)
When trained on large, unfiltered crawls from the internet, language mod...

The Extractive-Abstractive Axis: Measuring Content "Borrowing" in Generative Language Models (07/20/2023)
Generative language models produce highly abstractive outputs by design,...

A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Models on Twitter (10/07/2022)
Harmful content detection models tend to have higher false positive rate...
