Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification

06/21/2021
by Yada Pruksachatkun, et al.

Existing bias mitigation methods for reducing disparities in model outcomes across cohorts have focused on data augmentation, debiasing model embeddings, or adding fairness-based optimization objectives during training. Separately, certified word substitution robustness methods have been developed to decrease the impact of spurious features and synonym substitutions on model predictions. While their end goals differ, both families of methods encourage models to make the same prediction under certain changes to the input. In this paper, we investigate the utility of certified word substitution robustness methods for improving equality of odds and equality of opportunity on multiple text classification tasks. We observe that certified robustness methods improve fairness, and that using both robustness and bias mitigation methods during training yields improvements on both fronts.
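As a point of reference for the fairness criteria named above, the following is a minimal sketch (not taken from the paper) of how the equality-of-opportunity and equalized-odds gaps between two cohorts can be computed from binary predictions; the group labels and toy data are illustrative assumptions.

```python
# Sketch: per-group TPR/FPR and the resulting fairness gaps.
# Equality of opportunity compares true-positive rates across groups;
# equalized odds additionally compares false-positive rates.
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Return {group: (TPR, FPR)} computed from binary labels/predictions."""
    stats = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            stats[g]["tp" if p == 1 else "fn"] += 1
        else:
            stats[g]["fp" if p == 1 else "tn"] += 1
    rates = {}
    for g, s in stats.items():
        tpr = s["tp"] / (s["tp"] + s["fn"]) if (s["tp"] + s["fn"]) else 0.0
        fpr = s["fp"] / (s["fp"] + s["tn"]) if (s["fp"] + s["tn"]) else 0.0
        rates[g] = (tpr, fpr)
    return rates

# Toy example with two cohorts "a" and "b" (hypothetical data).
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

r = group_rates(y_true, y_pred, groups)
eo_gap = abs(r["a"][0] - r["b"][0])            # equality of opportunity: TPR gap
odds_gap = max(abs(r["a"][0] - r["b"][0]),     # equalized odds: worst of the
               abs(r["a"][1] - r["b"][1]))     # TPR and FPR gaps
print(f"Per-group (TPR, FPR): {r}")
print(f"Equality-of-opportunity gap: {eo_gap:.2f}, equalized-odds gap: {odds_gap:.2f}")
```

Smaller gaps indicate more equal error behavior across cohorts; a bias mitigation or robustness method is judged here by how much it shrinks these gaps without degrading accuracy.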

Related research

05/22/2023
On Bias and Fairness in NLP: How to have a fairer text classification?
In this paper, we provide a holistic analysis of the different sources o...

11/20/2022
Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness
Data-driven predictive solutions predominant in commercial applications ...

01/30/2023
How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification
To mitigate gender bias in contextualized language models, different int...

06/19/2020
Scalable Assessment and Mitigation Strategies for Fairness in Rankings
Motivated by industrial-scale applications, we consider two specific are...

08/03/2021
Your fairness may vary: Group fairness of pretrained language models in toxic text classification
We study the performance-fairness trade-off in more than a dozen fine-tu...

05/05/2022
Optimising Equal Opportunity Fairness in Model Training
Real-world datasets often encode stereotypes and societal biases. Such b...

09/18/2023
Predictive Uncertainty-based Bias Mitigation in Ranking
Societal biases that are contained in retrieved documents have received ...
