RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models

06/07/2021
by Soumya Barikeri et al.

Text representation models are prone to exhibiting a range of societal biases, reflecting the uncontrolled and biased nature of the underlying pretraining data; this leads to severe ethical issues and even bias amplification. Recent work has predominantly focused on measuring and mitigating bias in pretrained language models. Surprisingly, resources and methods for measuring and mitigating bias in conversational language models remain scarce: they cover only a few types of bias, rely on artificially constructed resources, and completely ignore the impact that debiasing methods may have on final performance in dialog tasks such as conversational response generation. In this work, we present RedditBias, the first conversational data set grounded in actual human conversations from Reddit, allowing for bias measurement and mitigation across four important bias dimensions: gender, race, religion, and queerness. Further, we develop an evaluation framework that simultaneously 1) measures bias on the RedditBias resource and 2) evaluates model capability in dialog tasks after debiasing. We use this framework to benchmark the widely used conversational model DialoGPT along with adaptations of four debiasing methods. Our results indicate that DialoGPT is biased with respect to religious groups and that some debiasing techniques can remove this bias while preserving downstream task performance.
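One common way to quantify bias of the kind the abstract describes is to score paired sentences that differ only in the demographic term (e.g., swapping the religious group mentioned) and compare the perplexities a language model assigns to each. The sketch below illustrates that idea in miniature; it is not the paper's exact procedure, and the per-token log-probabilities are hypothetical values standing in for what a model like DialoGPT would produce.

```python
import math

def perplexity(log_probs):
    # Perplexity from per-token log-probabilities (natural log):
    # exp of the negative mean log-probability.
    return math.exp(-sum(log_probs) / len(log_probs))

def bias_score(pairs):
    # pairs: list of (target_lls, counterfactual_lls), where each element
    # holds the per-token log-probs a language model assigned to a sentence
    # mentioning one group and to the same sentence with the group swapped.
    # A mean difference far from zero suggests the model systematically
    # prefers one phrasing over its counterfactual.
    diffs = [perplexity(t) - perplexity(c) for t, c in pairs]
    return sum(diffs) / len(diffs)

# Hypothetical log-probabilities, for illustration only.
pairs = [
    ([-2.0, -1.5, -1.8], [-2.4, -1.9, -2.2]),
    ([-1.9, -1.6, -1.7], [-2.1, -2.0, -2.3]),
]
print(round(bias_score(pairs), 3))
```

In a real evaluation one would obtain the log-probabilities from the conversational model under test and apply a significance test (e.g., a paired t-test) over many counterfactual pairs rather than reading off a raw mean.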


Related research

- 05/18/2023 | CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models
  Warning: This paper contains content that may be offensive or upsetting....

- 09/30/2020 | CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
  Pretrained language models, especially masked language models (MLMs) hav...

- 04/30/2021 | Mitigating Political Bias in Language Models Through Reinforced Calibration
  Current large-scale language models can be politically biased as a resul...

- 05/21/2023 | BiasAsker: Measuring the Bias in Conversational AI System
  Powered by advanced Artificial Intelligence (AI) techniques, conversatio...

- 10/16/2021 | ASR4REAL: An extended benchmark for speech models
  Popular ASR benchmarks such as Librispeech and Switchboard are limited i...

- 08/31/2023 | Conversational Swarm Intelligence, a Pilot Study
  Conversational Swarm Intelligence (CSI) is a new method for enabling lar...

- 09/07/2021 | Hi, my name is Martha: Using names to measure and mitigate bias in generative dialogue models
  All AI models are susceptible to learning biases in data that they are t...
