Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking

04/25/2022
by Qian Dong, et al.

Passage re-ranking aims to produce a permutation over the candidate passage set returned by the retrieval stage. Re-rankers have been greatly boosted by Pre-trained Language Models (PLMs) owing to their overwhelming advantages in natural language understanding. However, existing PLM-based re-rankers may easily suffer from vocabulary mismatch and a lack of domain-specific knowledge. To alleviate these problems, we carefully introduce the explicit knowledge contained in a knowledge graph. Specifically, we employ an existing knowledge graph, which is incomplete and noisy, and are the first to apply it to the passage re-ranking task. To leverage reliable knowledge, we propose a novel knowledge graph distillation method and obtain a knowledge meta graph that serves as a bridge between query and passage. To align both kinds of embeddings in the latent space, we employ a PLM as the text encoder and a graph neural network over the knowledge meta graph as the knowledge encoder. In addition, a novel knowledge injector is designed for dynamic interaction between the text and knowledge encoders. Experimental results demonstrate the effectiveness of our method, especially on queries requiring in-depth domain knowledge.
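The abstract describes a two-encoder design: a GNN runs over the knowledge meta graph while a PLM encodes the text, and an injector fuses the two representations. A minimal sketch of that idea is shown below, assuming toy entity embeddings, a single mean-aggregation message-passing layer, and a simple sigmoid-gated fusion as a stand-in for the paper's knowledge injector; all names, dimensions, and update rules here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy knowledge meta graph linking query and passage entities
# (entity names and edges are illustrative only).
entity_emb = {e: rng.normal(size=DIM) for e in ["aspirin", "analgesic", "pain"]}
edges = [("aspirin", "analgesic"), ("analgesic", "pain")]

def gnn_layer(emb, edges):
    """One mean-aggregation message-passing step over the meta graph."""
    neighbors = {e: [] for e in emb}
    for u, v in edges:
        neighbors[u].append(emb[v])
        neighbors[v].append(emb[u])
    out = {}
    for e, vec in emb.items():
        msgs = neighbors[e]
        agg = np.mean(msgs, axis=0) if msgs else np.zeros(DIM)
        out[e] = np.tanh(vec + agg)  # simple residual-style update
    return out

def inject(text_vec, knowledge_vecs):
    """Gated fusion of text and knowledge representations
    (a hypothetical stand-in for the paper's knowledge injector)."""
    k = np.mean(list(knowledge_vecs.values()), axis=0)
    gate = 1.0 / (1.0 + np.exp(-(text_vec @ k)))  # scalar sigmoid gate
    return text_vec + gate * k

text_vec = rng.normal(size=DIM)          # stand-in for a PLM [CLS] vector
knowledge = gnn_layer(entity_emb, edges)  # knowledge encoder pass
fused = inject(text_vec, knowledge)       # injected representation
print(fused.shape)
```

The fused vector would then feed a scoring head for re-ranking; the paper's actual injector interleaves this interaction across encoder layers rather than applying it once at the end.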
