On Learning and Enforcing Latent Assessment Models using Binary Feedback from Human Auditors Regarding Black-Box Classifiers

by Mukund Telukunta, et al.

The algorithmic fairness literature offers numerous mathematical notions and metrics, and also shows that tradeoffs arise between them: not all can be satisfied simultaneously. Furthermore, the contextual nature of fairness notions makes it difficult to automate bias evaluation across diverse algorithmic systems. Therefore, in this paper, we propose a novel model called the latent assessment model (LAM) to characterize binary feedback provided by human auditors, under the assumption that the auditor compares the classifier's output with his or her own intrinsic judgment for each input. We prove that individual and group fairness notions are guaranteed as long as the auditor's intrinsic judgments inherently satisfy the fairness notion at hand and are relatively similar to the classifier's evaluations. We also demonstrate this relationship between LAM and traditional fairness notions on three well-known datasets, namely the COMPAS, German Credit, and Adult Census Income datasets. Furthermore, we derive the minimum number of feedback samples needed to obtain PAC learning guarantees for estimating the LAM of a black-box classifier. These guarantees are validated by training standard machine learning algorithms on real binary feedback elicited from 400 human auditors regarding COMPAS.
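To make the feedback mechanism concrete, the following minimal sketch simulates LAM-style binary feedback and learns it with an off-the-shelf classifier. The one-dimensional inputs, the threshold-rule black-box classifier, and the auditor's intrinsic judgment are all hypothetical illustrations, not the paper's experimental setup; the point is only that feedback equals agreement between the classifier and the auditor's latent judgment, so learning the LAM reduces to predicting that agreement from the input.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical 1-D inputs in [0, 1].
X = rng.uniform(0.0, 1.0, size=(500, 1))

def classifier(x):
    # Black-box classifier: a simple threshold rule (illustrative only).
    return (x[:, 0] > 0.5).astype(int)

def intrinsic(x):
    # Auditor's intrinsic judgment, unknown to the learner:
    # a slightly shifted threshold.
    return (x[:, 0] > 0.45).astype(int)

# LAM binary feedback: 1 when the classifier's output agrees with the
# auditor's intrinsic judgment on that input, 0 otherwise.
feedback = (classifier(X) == intrinsic(X)).astype(int)

# Estimating the LAM from feedback is a supervised learning problem:
# fit a model that predicts feedback from the input. A shallow tree
# suffices to isolate the disagreement region (0.45, 0.5].
model = DecisionTreeClassifier(max_depth=2).fit(X, feedback)
print(model.score(X, feedback))
```

Because the disagreement region here is a single interval, a depth-2 tree recovers it almost exactly; the paper's PAC bound tells us how many such feedback samples suffice for this estimation in general.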




