Passage Re-ranking with BERT

01/13/2019
by Rodrigo Nogueira, et al.

Recently, neural models pretrained on a language modeling task, such as ELMo (Peters et al., 2017), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. In this paper, we describe a simple re-implementation of BERT for query-based passage re-ranking. Our system is the state of the art on the TREC-CAR dataset and the top entry in the leaderboard of the MS MARCO passage retrieval task, outperforming the previous state of the art by 27% (relative) in MRR@10. The code to reproduce our submission is available at https://github.com/nyu-dl/dl4marco-bert
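The core idea is straightforward: the query and each candidate passage are packed into BERT as a single sentence pair, and a classification head over the [CLS] token produces a relevance score used to re-rank the candidates. Below is a minimal sketch of this setup using the Hugging Face transformers library; the bert-base-uncased checkpoint and the rerank helper are illustrative assumptions rather than the authors' released code (which lives in the repository linked above), and a checkpoint fine-tuned on MS MARCO would be needed for the scores to be meaningful.

```python
# Sketch of BERT-based passage re-ranking: encode (query, passage) as a
# sentence pair and score relevance with a binary classification head.
# Checkpoint name and helper are illustrative, not the authors' code.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # label 1 = "relevant"
)
model.eval()

def rerank(query: str, passages: list[str]) -> list[tuple[float, str]]:
    """Score each candidate passage against the query, best first."""
    scored = []
    for passage in passages:
        # Encoded as a pair: [CLS] query [SEP] passage [SEP]
        inputs = tokenizer(
            query, passage,
            return_tensors="pt", truncation=True, max_length=512,
        )
        with torch.no_grad():
            logits = model(**inputs).logits
        # Probability of the "relevant" class is the re-ranking score.
        score = torch.softmax(logits, dim=-1)[0, 1].item()
        scored.append((score, passage))
    return sorted(scored, reverse=True)
```

In the paper's setup the classification head is fine-tuned with a cross-entropy loss over relevant/non-relevant labels, and at inference time the candidates come from a cheap first-stage retriever (BM25), so BERT only has to re-score a short list rather than the whole collection.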


Related research

- 03/25/2019 · Fine-tune BERT for Extractive Summarization: BERT, a pre-trained Transformer model, has achieved ground-breaking perf...
- 03/15/2016 · Unsupervised Ranking Model for Entity Coreference Resolution: Coreference resolution is one of the first stages in deep language under...
- 04/05/2021 · Rethinking Perturbations in Encoder-Decoders for Fast Training: We often use perturbations to regularize neural models. For neural encod...
- 10/27/2019 · Thieves on Sesame Street! Model Extraction of BERT-based APIs: We study the problem of model extraction in natural language processing,...
- 11/12/2022 · Dark patterns in e-commerce: a dataset and its baseline evaluations: Dark patterns, which are user interface designs in online services, indu...
- 04/30/2020 · Exploring Contextualized Neural Language Models for Temporal Dependency Parsing: Extracting temporal relations between events and time expressions has ma...
- 01/31/2020 · Pretrained Transformers for Simple Question Answering over Knowledge Graphs: Answering simple questions over knowledge graphs is a well-studied probl...
