Detect and Perturb: Neutral Rewriting of Biased and Sensitive Text via Gradient-based Decoding

09/24/2021
by Zexue He et al.

Written language carries explicit and implicit biases that can distract from meaningful signals. For example, letters of reference may describe male and female candidates differently, or an author's writing style may indirectly reveal demographic characteristics. At best, such biases distract from the meaningful content of the text; at worst, they can lead to unfair outcomes. We investigate the challenge of regenerating input sentences to 'neutralize' sensitive attributes while maintaining the semantic meaning of the original text (e.g., is the candidate qualified?). We propose a gradient-based rewriting framework, Detect and Perturb to Neutralize (DEPEN), that first detects sensitive components and masks them for regeneration, and then perturbs the generation model at decoding time under a neutralizing constraint that pushes the (predicted) distribution of sensitive attributes towards a uniform distribution. Our experiments in two different scenarios show that DEPEN can regenerate fluent alternatives that are neutral with respect to the sensitive attribute while maintaining the semantics of other attributes.
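To make the neutralizing constraint concrete, the sketch below shows one plausible form of the "perturb" step in PyTorch: a small perturbation of a decoder hidden state is optimized by gradient descent so that a sensitive-attribute classifier's predicted distribution moves toward uniform. This is an illustrative sketch only; the function and variable names (neutralizing_perturbation, attr_classifier, step_size) are hypothetical, and the paper's actual method applies such perturbations inside the decoding loop of a pretrained generation model after masking the detected sensitive spans.

    import math
    import torch
    import torch.nn.functional as F

    def neutralizing_perturbation(hidden, attr_classifier,
                                  num_steps=3, step_size=0.02):
        # Hypothetical sketch: nudge a decoder hidden state so that the
        # attribute classifier's predicted distribution moves toward uniform.
        num_attrs = attr_classifier(hidden).size(-1)
        delta = torch.zeros_like(hidden, requires_grad=True)
        for _ in range(num_steps):
            probs = F.softmax(attr_classifier(hidden + delta), dim=-1)
            # KL(predicted || uniform) = sum_i p_i (log p_i + log K);
            # it is zero exactly when the prediction is uniform.
            kl = (probs * (probs.clamp_min(1e-12).log()
                           + math.log(num_attrs))).sum(-1).mean()
            kl.backward()
            with torch.no_grad():
                # Normalized gradient step on the perturbation,
                # not on the model weights.
                delta -= step_size * delta.grad / (delta.grad.norm() + 1e-12)
                delta.grad.zero_()
        return (hidden + delta).detach()

    # Toy usage: a linear probe stands in for the attribute classifier.
    probe = torch.nn.Linear(16, 2)   # 2 hypothetical attribute classes
    h = torch.randn(1, 16)           # one decoder hidden state
    h_neutral = neutralizing_perturbation(h, probe)

Since KL(p || uniform) equals log K minus the entropy of p, minimizing it is equivalent to maximizing the entropy of the attribute prediction: the regenerated text becomes uninformative about the sensitive attribute, while the unperturbed parts of the hidden state preserve the remaining semantics.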

Related research

09/10/2021
Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness
Text style can reveal sensitive attributes of the author (e.g. race or a...

03/30/2022
Robust Reputation Independence in Ranking Systems for Multiple Sensitive Attributes
Ranking systems have an unprecedented influence on how and what informat...

05/17/2023
Shielded Representations: Protecting Sensitive Attributes Through Iterative Gradient-Based Projection
Natural language processing models tend to learn and encode social biase...

05/12/2023
Surfacing Biases in Large Language Models using Contrastive Input Decoding
Ensuring that large language models (LMs) are fair, robust and useful re...

09/15/2021
Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG
Ever since neural models were adopted in data-to-text language generatio...

05/28/2019
Overlearning Reveals Sensitive Attributes
'Overlearning' means that a model trained for a seemingly simple objecti...

11/03/2018
Content preserving text generation with attribute controls
In this work, we address the problem of modifying textual attributes of ...
