Contrastive Learning for Fair Representations

by Aili Shen, et al.

Trained classification models can unintentionally produce biased representations and predictions, which can reinforce societal preconceptions and stereotypes. Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise. In this paper, we propose a method for mitigating bias in classifier training by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations, while instances sharing a protected attribute are pushed further apart. In this way, our method learns representations that capture the task label in focused regions while spreading the protected attribute diversely, limiting its impact on prediction and thereby yielding fairer models. Extensive experimental results across four tasks in NLP and computer vision show (a) that our proposed method achieves fairer representations and reduces bias compared with competitive baselines; (b) that it does so without sacrificing main-task performance; and (c) that it sets a new state-of-the-art performance on one task despite reducing bias. Finally, our method is conceptually simple, agnostic to network architecture, and incurs minimal additional compute cost.
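The core idea described above can be sketched as a simple pairwise objective: same-class pairs are pulled together (high similarity rewarded) while same-protected-attribute pairs are pushed apart (high similarity penalised). The following is a minimal illustrative sketch in NumPy, not the authors' exact loss; the function name, the averaging scheme, and the temperature value are assumptions for illustration.

```python
import numpy as np

def fair_contrastive_loss(z, y, a, tau=0.5):
    """Toy contrastive objective for fair representations.

    z:   (n, d) array of instance representations
    y:   (n,) array of class labels        -> same-class pairs are pulled together
    a:   (n,) array of protected attributes -> same-attribute pairs are pushed apart
    tau: temperature scaling the cosine similarities (illustrative default)
    """
    # L2-normalise so the dot product is cosine similarity
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / tau

    n = len(y)
    off_diag = ~np.eye(n, dtype=bool)  # exclude self-pairs
    same_class = (y[:, None] == y[None, :]) & off_diag
    same_attr = (a[:, None] == a[None, :]) & off_diag

    # Pull term: reward similarity between same-class pairs
    pull = -sim[same_class].mean() if same_class.any() else 0.0
    # Push term: penalise similarity between same-protected-attribute pairs
    push = sim[same_attr].mean() if same_attr.any() else 0.0
    return pull + push
```

With class clusters well separated and protected attributes mixed across clusters, the loss is low; if instances collapse onto a single point regardless of attribute, the push term raises it. In practice such a term would be added to the standard cross-entropy objective with a weighting coefficient.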


