Supervised Contrastive Learning Approach for Contextual Ranking

07/07/2022
by Abhijit Anand, et al.

Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. This paper proposes a simple yet effective method to improve ranking performance on smaller datasets using supervised contrastive learning for the document ranking problem. We perform data augmentation by creating training data from parts of the relevant documents in query-document pairs. We then use a supervised contrastive learning objective to learn an effective ranking model from the augmented dataset. Our experiments on subsets of the TREC-DL dataset show that, although data augmentation increases the training data size, it does not necessarily improve performance under existing pointwise or pairwise training objectives. However, our proposed supervised contrastive loss objective yields performance improvements over the standard non-augmented setting, demonstrating the utility of data augmentation when paired with contrastive losses. Finally, we show the real benefit of supervised contrastive learning objectives through marked improvements on smaller ranking datasets covering news (Robust04), finance (FiQA), and scientific fact checking (SciFact).
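The abstract names the objective but not its exact form; the sketch below shows a standard supervised contrastive loss of the kind it builds on (in the style of Khosla et al., 2020), applied to embeddings of query-document pairs. Pairs constructed from passages of the same relevant document share a label and act as mutual positives, while the rest of the batch provides negatives. The function name, temperature value, embedding dimension, and batch construction are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of query-document pair
    embeddings: rows sharing a label (e.g. pairs built from passages of
    the same relevant document) are pulled together, and all other rows
    in the batch serve as negatives."""
    z = F.normalize(embeddings, dim=1)               # unit-length vectors
    sim = z @ z.t() / temperature                    # scaled cosine similarity
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    pos_mask = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_mask = pos_mask & ~self_mask                 # positives, minus self
    # log-probability of picking candidate j for anchor i
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # average log-probability over each anchor's positives
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return per_anchor[pos_mask.any(dim=1)].mean()    # anchors with positives

# Toy usage: one query paired with two passages drawn from the same relevant
# document (label 0) and with one non-relevant document (label 1).
pair_embs = torch.randn(3, 768)
pair_labels = torch.tensor([0, 0, 1])
print(supervised_contrastive_loss(pair_embs, pair_labels))
```

In this reading, the paper's augmentation step simply multiplies the number of positives available per query, which is exactly what a batch-level loss like the one above can exploit, whereas pointwise and pairwise objectives score each (augmented) pair independently.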


research · 05/27/2021
Contrastive Fine-tuning Improves Robustness for Neural Rankers
The performance of state-of-the-art neural rankers can deteriorate subst...

research · 06/21/2021
Learning to Rank Question Answer Pairs with Bilateral Contrastive Data Augmentation
In this work, we propose a novel and easy-to-apply data augmentation str...

research · 11/03/2020
Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
State-of-the-art natural language understanding classification models fo...

research · 04/28/2017
Neural Ranking Models with Weak Supervision
Despite the impressive improvements achieved by unsupervised deep neural...

research · 04/06/2023
Noise-Robust Dense Retrieval via Contrastive Alignment Post Training
The success of contextual word representations and advances in neural in...

research · 10/24/2022
Non-Contrastive Learning-based Behavioural Biometrics for Smart IoT Devices
Behaviour biometrics are being explored as a viable alternative to overc...

research · 01/08/2023
InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers
We carried out a reproducibility study of InPars recipe for unsupervised...
