Predictive Uncertainty-based Bias Mitigation in Ranking

09/18/2023
by Maria Heuss, et al.

Societal biases contained in retrieved documents have received increased interest. Such biases, which are often prevalent in the training data and learned by the model, can cause societal harm by misrepresenting certain groups and by reinforcing stereotypes. Mitigating such biases demands algorithms that balance the trade-off between maximizing utility for the user and fairness objectives that incentivize unbiased rankings. Prior work on bias mitigation often assumes that ranking scores, which correspond to the utility a document holds for a user, can be determined accurately. In reality, there is always a degree of uncertainty in the estimate of expected document utility. This uncertainty can be approximated by viewing ranking models through a Bayesian lens, where the standard deterministic score becomes a distribution. In this work, we investigate whether uncertainty estimates can be used to decrease the amount of bias in ranked results while minimizing the loss in measured utility. We introduce a simple method that uses the uncertainty of the ranking scores for an uncertainty-aware, post hoc approach to bias mitigation. We compare the proposed method with existing bias-mitigation baselines with respect to the utility-fairness trade-off, the controllability of the methods, and computational cost. We show that an uncertainty-based approach can provide an intuitive and flexible trade-off that outperforms all baselines without additional training requirements, allowing for post hoc use of this approach on top of arbitrary retrieval models.
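The abstract leaves the method itself unspecified, but its central idea, comparing score distributions rather than point estimates when deciding whether a fairness-motivated reordering is justified, can be sketched in code. The following is a minimal, hypothetical illustration and not the authors' actual algorithm: it assumes Gaussian predictive scores, a binary group attribute, and an illustrative tolerance parameter `alpha` that controls how much two score distributions must overlap before two documents are treated as interchangeable.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Doc:
    doc_id: str
    mu: float     # mean of the predictive score distribution
    sigma: float  # standard deviation of that distribution
    group: int    # 1 = protected group, 0 = otherwise (illustrative)

def statistically_tied(a: Doc, b: Doc, alpha: float) -> bool:
    """Treat two documents as utility-equivalent when the gap between
    their mean scores is small relative to their combined uncertainty."""
    return abs(a.mu - b.mu) <= alpha * sqrt(a.sigma**2 + b.sigma**2)

def uncertainty_aware_rerank(docs: list[Doc], alpha: float) -> list[Doc]:
    """Post hoc re-ranking sketch: start from the utility-optimal order
    (sort by mean score), then let protected-group documents bubble past
    neighbours whose score distributions overlap theirs. Documents that
    are clearly better under the model are never displaced."""
    ranking = sorted(docs, key=lambda d: d.mu, reverse=True)
    for i in range(1, len(ranking)):
        j = i
        while (j > 0
               and ranking[j].group == 1
               and ranking[j - 1].group == 0
               and statistically_tied(ranking[j], ranking[j - 1], alpha)):
            ranking[j - 1], ranking[j] = ranking[j], ranking[j - 1]
            j -= 1
    return ranking

# With alpha = 0 this recovers the deterministic ranking; larger alpha
# trades measured utility for fairness, mirroring the controllable
# trade-off described in the abstract.
docs = [Doc("d1", 0.92, 0.05, 0), Doc("d2", 0.90, 0.06, 1),
        Doc("d3", 0.75, 0.10, 0), Doc("d4", 0.72, 0.12, 1)]
print([d.doc_id for d in uncertainty_aware_rerank(docs, alpha=1.0)])
# -> ['d2', 'd1', 'd4', 'd3']: each protected document overtakes one
#    statistically tied neighbour, but d4 cannot pass d1, whose score
#    is clearly higher even under the combined uncertainty.
```

Because this style of mitigation only permutes an existing ranked list, it needs no retraining and can sit on top of any retrieval model that exposes score means and variances, which is consistent with the post hoc, model-agnostic framing in the abstract.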

Related research

Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers (04/28/2021)
Societal biases resonate in the retrieved contents of information retrie...

Group Membership Bias (08/05/2023)
When learning to rank from user interactions, search and recommendation ...

Fair Visual Recognition in Limited Data Regime using Self-Supervision and Self-Distillation (06/30/2021)
Deep learning models generally learn the biases present in the training ...

Model-agnostic bias mitigation methods with regressor distribution control for Wasserstein-based fairness metrics (11/19/2021)
This article is a companion paper to our earlier work Miroshnikov et al....

Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification (06/21/2021)
Existing bias mitigation methods to reduce disparities in model outcomes...

Your 2 is My 1, Your 3 is My 9: Handling Arbitrary Miscalibrations in Ratings (06/13/2018)
Cardinal scores (numeric ratings) collected from people are well known t...

A Discrimination Report Card (06/22/2023)
We develop an Empirical Bayes grading scheme that balances the informati...
