Gradient Reversal Against Discrimination

07/01/2018
by Edward Raff, et al.

No methods currently exist for making arbitrary neural networks fair. In this work we introduce GRAD, a new and simplified method for producing fair neural networks that can be used for auto-encoding fair representations or directly with predictive networks. It is easy to implement and add to existing architectures, has only one (insensitive) hyper-parameter, and provides improved individual and group fairness. We use the flexibility of GRAD to demonstrate multi-attribute protection.
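
As background on the mechanism the title refers to, a gradient reversal layer acts as the identity on the forward pass but negates (and scales) gradients on the backward pass. An adversarial head attached through such a layer learns to predict the protected attribute, while the reversed gradient simultaneously trains the shared encoder to discard information about that attribute. The PyTorch sketch below illustrates this general technique only; the class and parameter names (GradReverse, FairClassifier, lambda_) are hypothetical and not the authors' reference implementation.

```python
# Illustrative sketch of gradient reversal for fairness.
# Not the paper's reference code; all names here are hypothetical.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on backward."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient's sign (scaled by lambda_) before it reaches the encoder.
        return grad_output.neg() * ctx.lambda_, None

class FairClassifier(nn.Module):
    def __init__(self, in_dim, hidden, n_classes, n_protected, lambda_=1.0):
        super().__init__()
        self.lambda_ = lambda_  # the single hyper-parameter controlling reversal strength
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, n_classes)    # predicts the target label
        self.adv_head = nn.Linear(hidden, n_protected)   # predicts the protected attribute

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.task_head(z)
        # The adversary sees z through the reversal layer: it learns to predict
        # the protected attribute, while the encoder is trained to hide it.
        a_logits = self.adv_head(GradReverse.apply(z, self.lambda_))
        return y_logits, a_logits
```

Training then minimizes the sum of the task loss and the adversary's loss end to end; because the gradient flips sign at the encoder boundary, the encoder effectively maximizes the adversary's loss. Attaching one reversed head per protected attribute is one natural way to obtain the multi-attribute protection mentioned in the abstract.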

