Born Again Neural Rankers

09/30/2021
by Zhen Qin, et al.

We introduce Born Again neural Rankers (BAR) in the Learning to Rank (LTR) setting, where student rankers, trained in the Knowledge Distillation (KD) framework, are parameterized identically to their teachers. Unlike existing ranking distillation work, which pursues a good trade-off between performance and efficiency, BAR adapts the idea of Born Again Networks (BAN) to ranking problems and significantly improves the ranking performance of students over their teacher rankers without increasing model capacity. The key differences between BAR and common distillation techniques for classification are: (1) an appropriate teacher score transformation function, and (2) a novel listwise distillation framework. Both techniques are specifically designed for ranking problems and are rarely studied in the knowledge distillation literature. Using a state-of-the-art neural ranking architecture, BAR pushes the limits of neural rankers beyond those reported in a recent rigorous benchmark study and, for the first time in the literature, significantly outperforms traditionally strong gradient boosted decision tree based models on 7 out of 9 key metrics. In addition to the strong empirical results, we give theoretical explanations of why listwise distillation is effective for neural rankers.
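
The abstract highlights two ranking-specific ingredients: a teacher score transformation and a listwise distillation objective. The Python sketch below is an assumption about how such a listwise objective could look, not the paper's actual implementation: a placeholder transform_teacher_scores function (the exact transformation used by BAR is not given here), a softmax that turns each list of transformed teacher scores into a target distribution, and a softmax cross-entropy that trains an identically parameterized student against those soft targets. The power and temperature parameters are illustrative only.

```python
# Minimal sketch of listwise distillation for ranking (not the authors' code):
# transform the teacher's scores, softmax them per list, and train the student
# with a listwise softmax cross-entropy against the teacher distribution.
import torch
import torch.nn.functional as F


def transform_teacher_scores(teacher_scores: torch.Tensor, power: float = 2.0) -> torch.Tensor:
    """Hypothetical monotonic transformation of raw teacher scores.

    The paper argues that an appropriate transformation matters for ranking;
    a simple signed power transform stands in here as a placeholder.
    """
    return teacher_scores.sign() * teacher_scores.abs() ** power


def listwise_distillation_loss(student_scores: torch.Tensor,
                               teacher_scores: torch.Tensor,
                               temperature: float = 1.0) -> torch.Tensor:
    """Softmax cross-entropy between per-list teacher and student distributions.

    Both tensors have shape [batch_size, list_size]; each row holds the scores
    of one query's candidate documents.
    """
    teacher_probs = F.softmax(transform_teacher_scores(teacher_scores) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_scores / temperature, dim=-1)
    # Cross-entropy of the student distribution against the soft teacher targets.
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()


if __name__ == "__main__":
    # Born-again setup: the student shares the teacher's architecture but is
    # initialized from scratch and trained against the teacher's scores.
    teacher = torch.nn.Linear(16, 1)   # stands in for a trained teacher ranker
    student = torch.nn.Linear(16, 1)   # identical architecture, fresh parameters
    features = torch.randn(8, 20, 16)  # 8 queries, 20 candidate documents each

    with torch.no_grad():
        teacher_scores = teacher(features).squeeze(-1)
    student_scores = student(features).squeeze(-1)

    loss = listwise_distillation_loss(student_scores, teacher_scores)
    loss.backward()
    print(loss.item())
```

In practice this distillation term would typically be combined with the usual supervised ranking loss on the labeled relevance judgments; the relative weighting is another design choice not specified in the abstract.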

Related research

Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation (10/06/2020)
The latency of neural ranking models at query time is largely dependent ...

Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System (09/19/2018)
We propose a novel way to train ranking models, such as recommender syst...

On the Efficacy of Knowledge Distillation (10/03/2019)
In this paper, we present a thorough evaluation of the efficacy of knowl...

RD-Suite: A Benchmark for Ranking Distillation (06/07/2023)
The distillation of ranking models has become an important topic in both...

Toward Understanding Privileged Features Distillation in Learning-to-Rank (09/19/2022)
In learning-to-rank problems, a privileged feature is one that is availa...

Distillation from Heterogeneous Models for Top-K Recommendation (03/02/2023)
Recent recommender systems have shown remarkable performance by using an...

Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation (07/26/2021)
Text recognition remains a fundamental and extensively researched topic ...
