Prompt-Learning for Fine-Grained Entity Typing

08/24/2021
by Ning Ding et al.

As an effective approach to tune pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using cloze-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on three fine-grained entity typing benchmarks (with up to 86 classes) under fully supervised, few-shot and zero-shot settings show that prompt-learning methods significantly outperform fine-tuning baselines, especially when the training data is insufficient.
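The pipeline the abstract describes — wrapping the input in a cloze-style, entity-oriented template, then mapping the masked language model's predictions back to entity types through a verbalizer — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the template wording, the verbalizer entries, and the toy score dictionary (standing in for a PLM's output distribution at the `[MASK]` position) are all assumptions for the example.

```python
def build_prompt(sentence, mention, mask_token="[MASK]"):
    """Wrap the input with a cloze-style, entity-oriented template.
    The template wording is a hypothetical example."""
    return f"{sentence} In this sentence, {mention} is a {mask_token}."

# Verbalizer: map each entity type to label words that a PLM could
# predict at the [MASK] position (illustrative entries).
VERBALIZER = {
    "person":       ["person", "man", "woman"],
    "organization": ["organization", "company", "team"],
    "location":     ["location", "city", "country"],
}

def classify(mask_word_scores, verbalizer):
    """Average the [MASK]-position scores over each type's label
    words and return the highest-scoring entity type."""
    type_scores = {
        t: sum(mask_word_scores.get(w, 0.0) for w in words) / len(words)
        for t, words in verbalizer.items()
    }
    return max(type_scores, key=type_scores.get)

# Toy scores standing in for a masked LM's logits at [MASK].
scores = {"city": 7.1, "location": 5.2, "person": 1.0, "company": 0.3}
prompt = build_prompt("He moved to Paris last year.", "Paris")
print(prompt)
print(classify(scores, VERBALIZER))  # -> "location"
```

In a real system, the scores would come from running a pre-trained masked language model on the prompt; in the fully supervised and few-shot settings, the model is then trained with the masked-language-modeling objective so that the correct label words receive high probability.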


