Mixture-Based Correction for Position and Trust Bias in Counterfactual Learning to Rank

08/19/2021
by Ali Vardasbi, et al.

In counterfactual learning to rank (CLTR), user interactions are used as a source of supervision. Since user interactions come with bias, an important focus of research in this field is developing methods to correct for that bias. Inverse propensity scoring (IPS) is a popular method suitable for correcting position bias. Affine correction (AC) is a generalization of IPS that corrects for both position bias and trust bias. IPS and AC provably remove bias, conditioned on an accurate estimation of the bias parameters. Estimating the bias parameters, in turn, requires an accurate estimation of the relevance probabilities. This cyclic dependency introduces practical limitations in terms of sensitivity, convergence, and efficiency. We propose a new correction method for position and trust bias in CLTR in which, unlike existing methods, the correction does not rely on relevance estimation. Our proposed method, mixture-based correction (MBC), is based on the assumption that the distribution of click-through rates (CTRs) over the items being ranked is a mixture of two distributions: the distribution of CTRs for relevant items and the distribution of CTRs for non-relevant items. We prove that our method is unbiased, and the validity of our proof is not conditioned on accurate bias parameter estimation. Our experiments show that MBC, used across different bias settings and with different learning to rank (LTR) algorithms, outperforms AC, the state-of-the-art method for correcting position and trust bias, in some settings and performs on par with it in others. Furthermore, MBC is orders of magnitude more efficient than AC in terms of training time.
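To make the mixture idea concrete, here is a minimal sketch, not the paper's actual estimator: if observed item-level CTRs are treated as draws from a two-component mixture (one component for relevant items, one for non-relevant items), then fitting that mixture yields per-item relevance posteriors without ever estimating position or trust bias parameters. The Gaussian component assumption, the use of scikit-learn's GaussianMixture, and the toy CTR data below are all assumptions made for illustration only.

# Illustrative sketch only: recover soft relevance labels from item CTRs
# by fitting a two-component mixture (relevant vs. non-relevant items).
# Component family (Gaussian) and toy data are assumptions, not the paper's method.
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_based_relevance(ctrs):
    """Fit a 2-component mixture to item CTRs and return, for each item,
    the posterior probability of the higher-mean ("relevant") component."""
    x = np.asarray(ctrs, dtype=float).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, random_state=0).fit(x)
    # Treat the component with the larger mean CTR as the relevant one.
    relevant_component = int(np.argmax(gm.means_.ravel()))
    return gm.predict_proba(x)[:, relevant_component]

# Toy usage: simulate CTRs from two overlapping groups and recover soft labels.
rng = np.random.default_rng(0)
ctrs = np.concatenate([
    rng.normal(0.05, 0.02, size=200),   # non-relevant items: low CTR
    rng.normal(0.30, 0.08, size=100),   # relevant items: higher CTR
]).clip(0, 1)
p_relevant = mixture_based_relevance(ctrs)
print(p_relevant[:5], p_relevant[-5:])

The resulting posteriors can then serve as relevance targets for an LTR algorithm; the key point this sketch illustrates is that no bias parameters appear anywhere in the estimation.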


Related research

08/24/2020 · When Inverse Propensity Scoring does not Work: Affine Corrections for Unbiased Learning to Rank
Besides position bias, which has been well-studied, trust bias is anothe...

01/29/2020 · Correcting for Selection Bias in Learning-to-rank Systems
Click data collected by modern recommendation systems are an important s...

11/25/2021 · Unbiased Pairwise Learning to Rank in Recommender Systems
Nowadays, recommender systems already impact almost every facet of peopl...

02/10/2020 · Towards Mixture Proportion Estimation without Irreducibility
Mixture proportion estimation (MPE) is a fundamental problem of practica...

10/23/2020 · Unbiased Estimation Equation under f-Separable Bregman Distortion Measures
We discuss unbiased estimation equations in a class of objective functio...

05/01/2023 · On the Impact of Outlier Bias on User Clicks
User interaction data is an important source of supervision in counterfa...

06/24/2022 · Reaching the End of Unbiasedness: Uncovering Implicit Limitations of Click-Based Learning to Rank
Click-based learning to rank (LTR) tackles the mismatch between click fr...
