Justicia: A Stochastic SAT Approach to Formally Verify Fairness

09/14/2020
by Bishwamittra Ghosh, et al.

As a technology, ML is oblivious to societal good or bad, and the field of fair machine learning has therefore proposed multiple mathematical definitions, algorithms, and systems to ensure different notions of fairness in ML applications. Given the multitude of these propositions, it has become imperative to formally verify which fairness metrics are satisfied by different algorithms on different datasets. In this paper, we propose a stochastic satisfiability (SSAT) framework, Justicia, that formally verifies different fairness measures of supervised learning algorithms with respect to the underlying data distribution. We instantiate Justicia on multiple classification and bias-mitigation algorithms and datasets to verify different fairness metrics, such as disparate impact, statistical parity, and equalized odds. Justicia is scalable and accurate, and, unlike existing distribution-based verifiers such as FairSquare and VeriFair, it operates on non-Boolean and compound sensitive attributes. Being distribution-based by design, Justicia is also more robust than sample-based verifiers, such as AIF360, that operate on specific test samples. We further derive a theoretical bound on the finite-sample error of the verified fairness measure.
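For readers who want a concrete handle on the group-fairness metrics named above, the following minimal Python sketch shows how statistical parity difference and disparate impact are typically estimated from a classifier's binary predictions and a binary sensitive attribute. All names here are hypothetical, and the empirical, sample-based computation is only an illustration of the quantities involved; it is not Justicia's SSAT encoding, which verifies these measures with respect to the underlying data distribution.

import numpy as np

def group_fairness_metrics(y_pred, sensitive):
    # Illustrative, sample-based estimates of two metrics from the abstract.
    # y_pred: binary predictions of the classifier; sensitive: binary group
    # membership, with True taken (by assumption) as the privileged group.
    y_pred = np.asarray(y_pred, dtype=bool)
    sensitive = np.asarray(sensitive, dtype=bool)
    p_priv = y_pred[sensitive].mean()     # P(Y_hat = 1 | privileged)
    p_unpriv = y_pred[~sensitive].mean()  # P(Y_hat = 1 | unprivileged)
    return {
        # Statistical parity difference: difference of positive-prediction rates.
        "statistical_parity_difference": p_unpriv - p_priv,
        # Disparate impact: ratio of positive-prediction rates (assumes p_priv > 0).
        "disparate_impact": p_unpriv / p_priv,
    }

# Toy usage with made-up predictions and group labels.
print(group_fairness_metrics([1, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 0]))

A sample-based verifier such as AIF360 reports this kind of estimate on a specific test set; Justicia instead bounds the same quantities over the underlying distribution, which is the robustness advantage claimed in the abstract.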


