Fair NLP Models with Differentially Private Text Encoders

05/12/2022
by Gaurav Maheshwari, et al.

Encoded text representations often capture sensitive attributes about individuals (e.g., race or gender), which raises privacy concerns and can make downstream models unfair to certain groups. In this work, we propose FEDERATE, an approach that combines ideas from differential privacy and adversarial training to learn private text representations that also induce fairer models. We empirically evaluate the trade-off between the privacy of the representations and the fairness and accuracy of the downstream model on four NLP datasets. Our results show that FEDERATE consistently improves upon previous methods, suggesting that privacy and fairness can positively reinforce each other.
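The abstract names the two ingredients (a differential-privacy-style noise mechanism and adversarial training against a sensitive-attribute predictor) but not the architecture. A minimal illustrative sketch of how these two ideas can be combined in one encoder is below; every name, layer size, and noise scale here is my own assumption for illustration, not the paper's actual method:

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated gradient on the
    backward pass, so the encoder is trained to *hurt* the adversary."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class PrivateEncoder(nn.Module):
    """Toy encoder that (a) perturbs its bounded representation with Laplace
    noise, in the spirit of a DP output-perturbation mechanism, and (b) feeds
    a gradient-reversed copy to an adversary that tries to recover the
    sensitive attribute. Sizes and noise scale are arbitrary for the sketch."""

    def __init__(self, dim_in=16, dim_z=8, noise_scale=0.1):
        super().__init__()
        self.enc = nn.Linear(dim_in, dim_z)
        self.task_head = nn.Linear(dim_z, 2)   # main task (e.g., sentiment)
        self.adv_head = nn.Linear(dim_z, 2)    # sensitive attribute (e.g., gender)
        self.noise_scale = noise_scale

    def forward(self, x):
        z = torch.tanh(self.enc(x))  # tanh bounds z, limiting sensitivity
        # Laplace noise on the representation (DP-style perturbation)
        z = z + torch.distributions.Laplace(0.0, self.noise_scale).sample(z.shape)
        y = self.task_head(z)
        a = self.adv_head(GradReverse.apply(z))  # adversarial branch
        return y, a


torch.manual_seed(0)
model = PrivateEncoder()
x = torch.randn(4, 16)
y_true = torch.tensor([0, 1, 0, 1])  # toy task labels
a_true = torch.tensor([1, 0, 0, 1])  # toy sensitive-attribute labels
y_pred, a_pred = model(x)
# Joint loss: the task head learns the label; via gradient reversal, the
# encoder simultaneously learns to hide the sensitive attribute.
loss = (nn.functional.cross_entropy(y_pred, y_true)
        + nn.functional.cross_entropy(a_pred, a_true))
loss.backward()
```

In this sketch the noise injection and the adversarial branch act on the same representation, which is one plausible way the two mechanisms can reinforce each other: the noise limits what any predictor can extract, while the reversed gradient actively removes attribute-predictive directions from the encoding.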


Related research

09/07/2021  When differential privacy meets NLP: The devil is in the detail
06/11/2020  A Variational Approach to Privacy and Fairness
09/26/2020  Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach
05/16/2018  Towards Robust and Privacy-preserving Text Representations
12/07/2020  Improving Fairness and Privacy in Selection Problems
10/17/2022  Stochastic Differentially Private and Fair Learning
05/28/2019  Overlearning Reveals Sensitive Attributes
