Prototypical Verbalizer for Prompt-based Few-shot Tuning

by Ganqu Cui, et al.
Tsinghua University

Prompt-based tuning for pre-trained language models (PLMs) has proven effective in few-shot learning. Typically, prompt-based tuning wraps the input text into a cloze question. To make predictions, the model maps the output words to labels via a verbalizer, which is either manually designed or automatically built. However, manual verbalizers depend heavily on domain-specific prior knowledge and human effort, while automatically finding appropriate label words remains challenging. In this work, we propose the prototypical verbalizer (ProtoVerb), which is built directly from training data. Specifically, ProtoVerb learns prototype vectors as verbalizers by contrastive learning. In this way, the prototypes summarize training instances and are able to enclose rich class-level semantics. We conduct experiments on both topic classification and entity typing tasks, and the results demonstrate that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce. More surprisingly, ProtoVerb consistently boosts prompt-based tuning even on untuned PLMs, indicating an elegant non-tuning way to utilize PLMs. Our code is available at
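The core idea of a prototypical verbalizer can be illustrated with a minimal sketch: each class is represented by a prototype vector in the PLM's embedding space, and an instance is classified by its similarity to the prototypes. The sketch below is an assumption-laden simplification, not the paper's method; it computes prototypes as normalized class means over instance embeddings (ProtoVerb instead learns them with a contrastive objective), and the function names and toy vectors are invented for illustration.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    # One prototype per class: the L2-normalized mean of that class's
    # instance embeddings. (The paper learns prototypes via contrastive
    # learning; a class mean is the simplest stand-in.)
    protos = {}
    for c in np.unique(labels):
        v = embeddings[labels == c].mean(axis=0)
        protos[c] = v / np.linalg.norm(v)
    return protos

def predict(embedding, protos):
    # Score an instance by cosine similarity to every class prototype
    # and return the closest class.
    e = embedding / np.linalg.norm(embedding)
    return max(protos, key=lambda c: float(e @ protos[c]))
```

In the actual method, the embeddings would come from the PLM's representation at the masked position of the cloze template, so the prototypes play the role that manually chosen label words play in a hand-crafted verbalizer.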




