MMD-B-Fair: Learning Fair Representations with Statistical Testing

11/15/2022
by Namrata Deka, et al.

We introduce a method, MMD-B-Fair, to learn fair representations of data via kernel two-sample testing. We find neural features of our data where a maximum mean discrepancy (MMD) test cannot distinguish between different values of sensitive attributes, while preserving information about the target. Minimizing the power of an MMD test is more difficult than maximizing it (as done in previous work), because the test threshold's complex behavior cannot be simply ignored. Our method exploits the simple asymptotics of block testing schemes to efficiently find fair representations without requiring the complex adversarial optimization or generative modelling schemes widely used by existing work on fair representation learning. We evaluate our approach on various datasets, showing its ability to "hide" information about sensitive attributes, and its effectiveness in downstream transfer tasks.
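
The core technical idea is a differentiable proxy for the power of a block (B-test) MMD test: block-wise unbiased MMD² estimates are roughly i.i.d., so their average is asymptotically normal and the test's power is governed by the ratio of their mean to their standard deviation. Below is a minimal PyTorch sketch of such a proxy; it is not the authors' implementation, and the names (`gaussian_kernel`, `mmd2_unbiased`, `block_power_proxy`), the fixed kernel bandwidth, and the block size are illustrative assumptions.

```python
# Minimal sketch of a block-MMD test-power proxy, assuming a Gaussian kernel
# with a fixed bandwidth. NOT the authors' implementation; names, bandwidth,
# and block size are illustrative choices.
import torch


def gaussian_kernel(x, y, bandwidth=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))


def mmd2_unbiased(x, y, bandwidth=1.0):
    # Unbiased estimate of MMD^2 between two blocks of samples.
    m, n = x.shape[0], y.shape[0]
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    term_xx = (kxx.sum() - kxx.diag().sum()) / (m * (m - 1))
    term_yy = (kyy.sum() - kyy.diag().sum()) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()


def block_power_proxy(zx, zy, block_size=16, bandwidth=1.0):
    # Block (B-test) scheme: block-wise MMD^2 estimates are roughly i.i.d.,
    # so their average is asymptotically normal and test power is governed
    # by the mean/std ratio of the block estimates. Minimizing this ratio
    # for the sensitive attribute is one differentiable way to push the
    # test's power toward its significance level. (Assumes >= 2 blocks.)
    n_blocks = min(zx.shape[0], zy.shape[0]) // block_size
    estimates = torch.stack([
        mmd2_unbiased(
            zx[i * block_size:(i + 1) * block_size],
            zy[i * block_size:(i + 1) * block_size],
            bandwidth,
        )
        for i in range(n_blocks)
    ])
    return estimates.mean() / (estimates.std() + 1e-8)
```

In a hypothetical training loop, one would encode a batch, split the encodings by sensitive attribute, and add a weighted `block_power_proxy(z[s == 0], z[s == 1])` term to a standard task loss, so the learned features keep target information while the MMD test loses power to distinguish sensitive groups.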
