Explainable Global Fairness Verification of Tree-Based Classifiers

09/27/2022
by   Stefano Calzavara, et al.

We present a new approach to the global fairness verification of tree-based classifiers. Given a tree-based classifier and a set of sensitive features potentially leading to discrimination, our analysis synthesizes sufficient conditions for fairness, expressed as a set of traditional propositional logic formulas, which are readily understandable by human experts. The verified fairness guarantees are global, in that the formulas predicate over all the possible inputs of the classifier, rather than just a few specific test instances. Our analysis is formally proved both sound and complete. Experimental results on public datasets show that the analysis is precise, explainable to human experts and efficient enough for practical adoption.
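To make the notion of a *global* guarantee concrete, here is a minimal, hypothetical sketch (not the paper's synthesis algorithm): a tiny hand-written decision tree over boolean features, and a brute-force check that its prediction never depends on the sensitive feature across the entire input space, rather than on a sample of test instances. All names (`classify`, `globally_fair`, the feature set) are illustrative assumptions.

```python
from itertools import product

# Hypothetical toy setup: three boolean features, with "gender" as the
# sensitive feature. This illustrates what a *global* fairness property
# means: it is checked over every possible input, not a test set.
FEATURES = ["gender", "income_high", "has_degree"]
SENSITIVE = "gender"

def classify(x):
    # A small hand-written decision tree (illustrative only).
    if x["income_high"]:
        return 1
    if x["has_degree"]:
        return 1
    return 0

def globally_fair(classify, features, sensitive):
    """Return True iff no input's prediction changes when the sensitive
    feature is flipped, enumerating the whole (finite) input space."""
    others = [f for f in features if f != sensitive]
    for values in product([False, True], repeat=len(others)):
        x = dict(zip(others, values))
        if classify({**x, sensitive: False}) != classify({**x, sensitive: True}):
            return False
    return True

print(globally_fair(classify, FEATURES, SENSITIVE))  # True: the tree never tests gender
```

Brute-force enumeration only works on toy boolean spaces; the contribution of the paper is instead to synthesize human-readable sufficient conditions (propositional formulas) that certify this kind of property for real tree-based classifiers, with soundness and completeness proofs.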



Related Research

01/04/2021 · Fair Training of Decision Tree Classifiers — We study the problem of formally verifying individual fairness of decisi...

11/01/2020 · Making ML models fairer through explanations: the case of LimeOut — Algorithmic decisions are now being used on a daily basis, and based on ...

10/10/2022 · FEAMOE: Fair, Explainable and Adaptive Mixture of Experts — Three key properties that are desired of trustworthy machine learning mo...

07/02/2018 · Automated Directed Fairness Testing — Fairness is a critical trait in decision making. As machine-learning mod...

03/08/2021 · Fairness seen as Global Sensitivity Analysis — Ensuring that a predictor is not biased against a sensible feature is th...

02/21/2019 · Capuchin: Causal Database Repair for Algorithmic Fairness — Fairness is increasingly recognized as a critical component of machine l...

01/20/2013 · Cellular Tree Classifiers — The cellular tree classifier model addresses a fundamental problem in th...
