Towards Inclusive Fairness Evaluation via Eliciting Disagreement Feedback from Non-Expert Stakeholders

by   Mukund Telukunta, et al.

Traditional algorithmic fairness notions rely on label feedback, which can only be elicited from expert critics. In most practical applications, however, several non-expert stakeholders also play a major role in the system and can hold distinctive opinions about the decision-making philosophy. For example, in kidney placement programs, transplant surgeons are often wary of accepting kidney offers for Black patients due to genetic reasons. Non-expert stakeholders in such programs (e.g., patients, donors, and their family members) may misinterpret these decisions as social discrimination. This paper evaluates group fairness notions from the viewpoint of non-expert stakeholders, who can only provide binary agreement/disagreement feedback regarding the decision in context. Specifically, two types of group fairness notions are identified: (i) definite notions (e.g., calibration), which can be evaluated exactly using disagreement feedback, and (ii) indefinite notions (e.g., equal opportunity), which suffer from uncertainty due to the lack of label feedback. For indefinite notions, bounds are derived from disagreement rates, and an estimate is constructed from these bounds. All of our findings are validated empirically on a real human-feedback dataset.
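As a minimal illustrative sketch (not the paper's actual estimators), the quantities described above can be computed from binary agreement/disagreement flags. The data layout, the interpretation of a disagreement as "the stakeholder would flip this decision," and all function names below are assumptions made for illustration only.

```python
# Illustrative sketch: group fairness quantities from binary
# agreement/disagreement feedback. Assumption (not from the paper):
# a disagreement flag means the stakeholder believes the opposite
# decision was correct, so agreement serves as a proxy label.
import numpy as np

def disagreement_rates(decisions, groups, disagree):
    """Per-group fraction of decisions that stakeholders disagree with."""
    groups, disagree = np.asarray(groups), np.asarray(disagree)
    return {int(g): float(disagree[groups == g].mean())
            for g in np.unique(groups)}

def demographic_parity_gap(decisions, groups):
    """Absolute gap in positive-decision rates between two groups
    (computable exactly from decisions alone, no labels needed)."""
    decisions, groups = np.asarray(decisions), np.asarray(groups)
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return float(abs(rates[0] - rates[1]))

def proxy_equal_opportunity_gap(decisions, groups, disagree):
    """Equal-opportunity gap using disagreement-corrected decisions as
    proxy labels: flip the decision wherever the stakeholder disagreed."""
    decisions, groups, disagree = (np.asarray(a) for a in
                                   (decisions, groups, disagree))
    proxy = decisions ^ disagree  # stakeholder's believed correct label
    tprs = [decisions[(groups == g) & (proxy == 1)].mean()
            for g in np.unique(groups)]
    return float(abs(tprs[0] - tprs[1]))

# Toy example: binary decisions for two groups with stakeholder feedback.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
disagree  = [0, 1, 0, 0, 1, 1, 0, 0]

print(disagreement_rates(decisions, groups, disagree))       # {0: 0.25, 1: 0.5}
print(demographic_parity_gap(decisions, groups))             # 0.5
print(proxy_equal_opportunity_gap(decisions, groups, disagree))  # ~0.417
```

The demographic-parity gap needs only the decisions themselves, mirroring the "definite" case; the equal-opportunity gap here leans on the proxy-label assumption, mirroring the uncertainty the paper bounds for "indefinite" notions.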



Non-Comparative Fairness for Human-Auditing and Its Relation to Traditional Fairness Notions

Bias evaluation in machine-learning based services (MLS) based on tradit...

On Learning and Enforcing Latent Assessment Models using Binary Feedback from Human Auditors Regarding Black-Box Classifiers

Algorithmic fairness literature presents numerous mathematical notions a...

Fair Decision-making Under Uncertainty

There has been concern within the artificial intelligence (AI) community...

Enhancing the Accuracy and Fairness of Human Decision Making

Societies often rely on human experts to take a wide variety of decision...

Comparing Fairness Criteria Based on Social Outcome

Fairness in algorithmic decision-making processes is attracting increasi...

An example of prediction which complies with Demographic Parity and equalizes group-wise risks in the context of regression

Let (X, S, Y) ∈ ℝ^p × {1, 2} × ℝ be a triplet following some joint distribut...

Algorithmic Pluralism: A Structural Approach Towards Equal Opportunity

While the idea of equal opportunity enjoys a broad consensus, many disag...
