Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing

07/27/2020
by   Yi Zhang, et al.

Machine learning fairness concerns the biases toward certain protected or sensitive groups of people that arise when addressing target tasks. This paper studies the debiasing problem in the context of image classification. Our data analysis on facial attribute recognition demonstrates (1) that model bias can be attributed to an imbalanced training data distribution and (2) the potential of adversarial examples for balancing the data distribution. We are thus motivated to employ adversarial examples to augment the training data for visual debiasing. Specifically, to ensure adversarial generalization as well as cross-task transferability, we propose to couple the operations of target task classifier training, bias task classifier training, and adversarial example generation. The generated adversarial examples supplement the target task training set by balancing the distribution over the bias variable in an online fashion. Results on simulated and real-world debiasing experiments demonstrate the effectiveness of the proposed solution in simultaneously improving model accuracy and fairness. A preliminary experiment on few-shot learning further shows the potential of adversarial attack-based pseudo-sample generation as an alternative way to compensate for a shortage of training data.
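The core idea of attack-based augmentation can be sketched with a one-step FGSM attack against the bias classifier: perturb an image so that the bias classifier predicts the opposite bias group, while the target-task label is carried over unchanged, yielding pseudo-samples for the under-represented group. The tiny models, function names, and epsilon below are illustrative assumptions, not the paper's architecture or hyperparameters.

```python
import torch
import torch.nn as nn

# Hypothetical tiny classifiers for illustration only (the paper's actual
# networks are CNNs trained on facial attributes).
target_clf = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 2))  # target task
bias_clf = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, 2))    # bias attribute

def fgsm_flip_bias(x, bias_label, eps=0.03):
    """One-step FGSM toward the opposite bias label: the perturbation
    descends the bias classifier's loss w.r.t. the flipped label."""
    x = x.clone().requires_grad_(True)
    flipped = 1 - bias_label                           # opposite bias group
    loss = nn.functional.cross_entropy(bias_clf(x), flipped)
    loss.backward()
    # x - eps * sign(grad) moves the input toward the flipped bias label
    return (x - eps * x.grad.sign()).detach().clamp(0, 1)

# Online augmentation step: each image gains a pseudo-sample assigned to the
# under-represented bias group, keeping its target-task label unchanged.
x = torch.rand(4, 1, 8, 8)                # a toy batch of 8x8 "images"
y_target = torch.randint(0, 2, (4,))      # target-task labels
y_bias = torch.randint(0, 2, (4,))        # bias-attribute labels
x_adv = fgsm_flip_bias(x, y_bias)
x_aug = torch.cat([x, x_adv])             # balanced over the bias variable
y_aug = torch.cat([y_target, y_target])   # target labels carried over
```

In the paper's online scheme, this generation step would be interleaved with training of both classifiers each iteration, rather than run once on a frozen bias model as in this sketch.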


