fAux: Testing Individual Fairness via Gradient Alignment

10/10/2022
by Giuseppe Castiglione, et al.

Machine learning models are vulnerable to biases that result in unfair treatment of individuals from different populations. Recent work that aims to test a model's fairness at the individual level either relies on domain knowledge to choose metrics, or on input transformations that risk generating out-of-domain samples. We describe a new approach for testing individual fairness that has neither requirement. We propose a novel criterion for evaluating individual fairness and, based on it, develop a practical testing method, fAux (pronounced fox), which compares the derivatives of the predictions of the model under test with those of an auxiliary model trained to predict the protected variable from the observed data. We show that the proposed method effectively identifies discrimination on both synthetic and real-world datasets, and has quantitative and qualitative advantages over contemporary methods.
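The abstract describes comparing input gradients of the model under test with those of an auxiliary model that predicts the protected variable. A minimal sketch of one plausible such criterion is cosine alignment between the two gradients; the logistic models, the weights, and the flagging threshold below are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def grad_logistic(w, x):
    # Gradient of sigmoid(w . x) with respect to the input x.
    z = float(w @ x)
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s) * w

def alignment_score(w_task, w_aux, x):
    """Cosine alignment between the task-model gradient and the
    auxiliary-model gradient at input x (illustrative criterion)."""
    g_f = grad_logistic(w_task, x)  # model under test
    g_a = grad_logistic(w_aux, x)   # auxiliary model for the protected variable
    return float(g_f @ g_a / (np.linalg.norm(g_f) * np.linalg.norm(g_a)))

# Hypothetical example: the task model leans on feature 0, which the
# auxiliary model indicates is predictive of the protected attribute.
w_task = np.array([2.0, 0.1])
w_aux = np.array([1.8, 0.0])
x = np.array([0.5, -0.3])

score = alignment_score(w_task, w_aux, x)
flagged = score > 0.9  # the threshold is an assumption for illustration
```

High alignment means a small input change that moves the task prediction also moves the predicted protected attribute, which is the kind of individual-level discrimination signal the method is after.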


Related research

- Automatic Fairness Testing of Neural Classifiers through Adversarial Sampling (07/17/2021)
- Enhanced Fairness Testing via Generating Effective Initial Individual Discriminatory Instances (09/17/2022)
- Explainability for identification of vulnerable groups in machine learning models (03/01/2022)
- Operationalizing Individual Fairness with Pairwise Fair Representations (07/02/2019)
- FAIROD: Fairness-aware Outlier Detection (12/05/2020)
- Fairify: Fairness Verification of Neural Networks (12/08/2022)
- Fairness Testing of Deep Image Classification with Adequacy Metrics (11/17/2021)
