Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation

11/10/2019
by Emily Dinan, et al.

Models often easily learn biases present in the training data, and their predictions directly reflect this bias. We analyze the presence of gender bias in dialogue and examine the subsequent effect on generative chitchat dialogue models. Based on this analysis, we propose a combination of three techniques to mitigate bias: counterfactual data augmentation, targeted data collection, and conditional training. We focus on the multi-player text-based fantasy adventure dataset LIGHT as a testbed for our work. LIGHT contains a gender imbalance, with roughly 1.6 times as many male as female characters, likely because it is entirely collected by crowdworkers and reflects common biases that exist in fantasy or medieval settings. We show that (i) our proposed techniques mitigate gender bias by balancing the genderedness of generated dialogue utterances, and (ii) they work particularly well in combination. Further, we show through various metrics (quantity of gendered words, a dialogue safety classifier, and human evaluation) that our models generate less gendered, but still engaging, chitchat responses.
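As an illustration of the first of these techniques, counterfactual data augmentation can be sketched as swapping gendered words in each training utterance and adding the swapped copy to the dataset. The word-pair list and function names below are illustrative assumptions, not the paper's actual vocabulary or implementation:

```python
# Minimal sketch of counterfactual data augmentation (CDA) for dialogue text.
# The gendered word-pair list is a tiny illustrative subset; a real system
# would use a much larger curated list and handle casing and ambiguity
# (e.g. "her" as possessive vs. objective) more carefully.
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
    "king": "queen", "queen": "king",
}

def counterfactual_swap(utterance: str) -> str:
    """Return a copy of the utterance with gendered words swapped."""
    tokens = utterance.lower().split()
    return " ".join(GENDER_PAIRS.get(tok, tok) for tok in tokens)

def augment(utterances):
    """Pair each utterance with its counterfactual to balance the data."""
    out = []
    for u in utterances:
        out.append(u)
        out.append(counterfactual_swap(u))
    return out

print(counterfactual_swap("the queen is powerful"))  # → "the king is powerful"
```

Training on both the original and the swapped utterances pushes the model toward treating gendered words symmetrically, which is the balancing effect the abstract describes.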
