FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders

03/11/2021
by Pengyu Cheng, et al.

Pretrained text encoders, such as BERT, have been increasingly applied to various natural language processing (NLP) tasks and have recently demonstrated significant performance gains. However, recent studies have revealed the existence of social bias in these pretrained NLP models. Although prior work has made progress on word-level debiasing, sentence-level fairness of pretrained encoders remains underexplored. In this paper, we propose the first neural debiasing method for a pretrained sentence encoder, which transforms the pretrained encoder's outputs into debiased representations via a fair filter (FairFil) network. To learn the FairFil, we introduce a contrastive learning framework that not only minimizes the correlation between filtered embeddings and bias words but also preserves the rich semantic information of the original sentences. On real-world datasets, FairFil effectively reduces the bias degree of pretrained text encoders while consistently maintaining desirable performance on downstream tasks. Moreover, our post-hoc method does not require any retraining of the text encoders, further enlarging FairFil's application space.
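
The abstract only names the ingredients of the approach, so the sketch below is a rough illustration of the general idea: a small filter network trained on top of a frozen encoder with an InfoNCE-style contrastive loss that pulls together embeddings of a sentence and its demographically swapped counterfactual. The FairFilter class, the info_nce helper, and the counterfactual-pair setup are illustrative assumptions for this sketch; the paper's actual objectives (including its bias-word debiasing term and mutual-information estimators) are not reproduced here.

```python
# Minimal sketch of a post-hoc "fair filter" in the spirit of FairFil,
# assuming precomputed embeddings from a frozen sentence encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FairFilter(nn.Module):
    """Small MLP applied on top of frozen encoder outputs (assumed design)."""
    def __init__(self, dim: int = 768, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: row i of `a` should match row i of `b`
    against all other rows in the batch as negatives."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# z_orig / z_swap stand in for frozen-encoder embeddings of a sentence
# and of its counterfactual with demographic terms swapped
# (e.g. "he" <-> "she"); random tensors here keep the sketch runnable.
batch, dim = 32, 768
z_orig = torch.randn(batch, dim)
z_swap = torch.randn(batch, dim)

filt = FairFilter(dim)
opt = torch.optim.Adam(filt.parameters(), lr=1e-4)

# Pull counterfactual pairs together in the filtered space.
loss = info_nce(filt(z_orig), filt(z_swap))
opt.zero_grad()
loss.backward()
opt.step()
```

Because the encoder stays frozen and only the lightweight filter is trained, this setup mirrors the post-hoc property the abstract emphasizes: the underlying text encoder needs no retraining.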


Related research

08/19/2021 · Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models
We provide the first exploration of text-to-text transformers (T5) sente...

01/15/2021 · TextGNN: Improving Text Encoder via Graph Neural Network in Sponsored Search
Text encoders based on C-DSSM or transformers have demonstrated strong p...

10/02/2020 · Which *BERT? A Survey Organizing Contextualized Encoders
Pretrained contextualized text encoders are now a staple of the NLP comm...

09/01/2019 · Higher-order Comparisons of Sentence Encoder Representations
Representational Similarity Analysis (RSA) is a technique developed by n...

09/20/2018 · Predicting Argumenthood of English Preposition Phrases
Distinguishing between core and non-core dependents (i.e., arguments and...

11/09/2019 · ConveRT: Efficient and Accurate Conversational Representations from Transformers
General-purpose pretrained sentence encoders such as BERT are not ideal ...

04/29/2021 · Text-to-Text Multi-view Learning for Passage Re-ranking
Recently, much progress in natural language processing has been driven b...
