PTR: Prompt Tuning with Rules for Text Classification

by Xu Han, et al.

Fine-tuned pre-trained language models (PLMs) have achieved impressive performance on almost all NLP tasks. By using additional prompts to fine-tune PLMs, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks. Prompt tuning has achieved promising results on some few-class classification tasks such as sentiment classification and natural language inference. However, manually designing many language prompts is cumbersome and error-prone, and for auto-generated prompts it is expensive and time-consuming to verify their effectiveness in non-few-shot scenarios. Hence, it is challenging for prompt tuning to address many-class classification tasks. To this end, we propose prompt tuning with rules (PTR) for many-class text classification, applying logic rules to construct prompts from several sub-prompts. In this way, PTR is able to encode prior knowledge of each class into prompt tuning. We conduct experiments on relation classification, a typical many-class classification task, and the results on benchmarks show that PTR significantly and consistently outperforms existing state-of-the-art baselines. This indicates that PTR is a promising approach to take advantage of PLMs for complicated classification tasks.
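To make the idea of rule-composed prompts concrete, the following is a minimal sketch (not the authors' code) of how a many-class prompt might be assembled from sub-prompts under a conjunctive logic rule for relation classification. All function names, templates, and label words below are illustrative assumptions, not the paper's actual implementation.

```python
def entity_subprompt(span):
    # Sub-prompt querying the type of one entity span via a masked slot.
    return f"the [MASK] {span}"

def relation_subprompt(subj, obj):
    # Sub-prompt querying how the two entity spans relate.
    return f"{subj} [MASK] {obj}"

def compose_prompt(sentence, subj, obj):
    # Conjunctive rule: type(subj) AND relation(subj, obj) AND type(obj).
    # The composed prompt contains one [MASK] per sub-prompt.
    parts = [
        entity_subprompt(subj),
        relation_subprompt(subj, obj),
        entity_subprompt(obj),
    ]
    return sentence + " " + " , ".join(parts)

# Each [MASK] position has its own candidate label words; a relation class is
# predicted only when all three mask predictions satisfy its rule (hypothetical
# label-word table for illustration).
LABEL_WORDS = {
    "person:employee_of": ("person", "works for", "organization"),
    "org:founded_by": ("organization", "was founded by", "person"),
}

prompt = compose_prompt(
    "Mark Zuckerberg founded Facebook.", "Mark Zuckerberg", "Facebook"
)
print(prompt)  # a single prompt with three [MASK] slots
```

The key point the sketch illustrates: instead of hand-writing one prompt per class (infeasible for many-class tasks), a small set of reusable sub-prompts plus per-class label-word tuples covers the whole label space.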


