Effective Structured Prompting by Meta-Learning and Representative Verbalizer

06/01/2023
by Weisen Jiang, et al.

Prompt tuning for pre-trained masked language models (MLMs) has shown promising performance on natural language processing tasks with few labeled examples. It tunes a prompt for the downstream task, and a verbalizer maps the predicted token to the label prediction. Because training data are limited, prompt initialization is crucial for prompt tuning. Recently, MetaPrompting (Hou et al., 2022) used meta-learning to learn a shared initialization for all task-specific prompts. However, when tasks are complex, a single initialization is insufficient to obtain good prompts for all tasks and samples. Moreover, MetaPrompting tunes the whole MLM, which imposes a heavy computation and memory burden because MLMs are usually large. To address these issues, we use a prompt pool to extract more task knowledge and construct instance-dependent prompts via attention. We further propose a novel soft verbalizer (RepVerb) that constructs label embeddings directly from feature embeddings. Combining meta-learning of the prompt pool with RepVerb, we propose MetaPrompter for effective structured prompting. MetaPrompter is parameter-efficient, as only the prompt pool needs to be tuned. Experimental results demonstrate that MetaPrompter outperforms recent state-of-the-art methods and that RepVerb outperforms existing soft verbalizers.
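To make the "instance-dependent prompts via attention" idea concrete, here is a minimal sketch (not the authors' code) of attending over a learnable prompt pool. The class name `PromptPool`, the pool/prompt sizes, and the choice of a mean-pooled input embedding as the attention query are illustrative assumptions; only the overall pattern (learned keys scored against an instance query, softmax weights combining pooled prompts, prompt prepended to the input embeddings of a frozen MLM) follows the abstract.

```python
# Illustrative sketch of instance-dependent prompts built by attention
# over a prompt pool. All hyperparameters and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    def __init__(self, pool_size: int = 8, prompt_len: int = 4, embed_dim: int = 128):
        super().__init__()
        # Pool of learnable prompts: (pool_size, prompt_len, embed_dim).
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, embed_dim) * 0.02)
        # One learnable key per pooled prompt, scored against the instance query.
        self.keys = nn.Parameter(torch.randn(pool_size, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim) from the frozen MLM's embedding layer.
        query = token_embeds.mean(dim=1)                         # (batch, embed_dim)
        scores = query @ self.keys.t() / query.shape[-1] ** 0.5  # (batch, pool_size)
        attn = F.softmax(scores, dim=-1)                         # attention over the pool
        # Weighted combination of pooled prompts -> one prompt per instance.
        prompt = torch.einsum("bp,ple->ble", attn, self.prompts)  # (batch, prompt_len, embed_dim)
        # Prepend the instance-dependent prompt to the input embeddings.
        return torch.cat([prompt, token_embeds], dim=1)

pool = PromptPool()
x = torch.randn(2, 16, 128)   # toy batch of token embeddings
print(pool(x).shape)          # torch.Size([2, 20, 128]): 4 prompt + 16 input tokens
```

Only the pool (prompts and keys) carries trainable parameters here, which is what makes this style of prompting parameter-efficient relative to tuning the whole MLM.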

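Likewise, a RepVerb-style soft verbalizer can be sketched as building each label's embedding directly from the feature embeddings of its labeled examples and scoring a query by similarity. Using the per-class mean of [MASK]-token features and cosine similarity is an assumption for illustration, not necessarily the paper's exact formulation.

```python
# Illustrative sketch of a soft verbalizer that constructs label embeddings
# directly from feature embeddings (per-class mean), as RepVerb proposes.
import torch
import torch.nn.functional as F

def label_embeddings(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    # features: (n, d) feature embeddings (e.g., of the [MASK] token) for labeled examples.
    # Each label embedding is the mean feature of that class's examples.
    embeds = torch.zeros(num_classes, features.shape[1])
    for c in range(num_classes):
        embeds[c] = features[labels == c].mean(dim=0)
    return embeds

def verbalize(query: torch.Tensor, embeds: torch.Tensor) -> torch.Tensor:
    # Score each class by cosine similarity between query features and label embeddings.
    return F.cosine_similarity(query.unsqueeze(1), embeds.unsqueeze(0), dim=-1)

# Toy usage: 6 labeled examples over 3 classes, 2 queries.
feats = torch.randn(6, 32)
labels = torch.tensor([0, 0, 1, 1, 2, 2])
embeds = label_embeddings(feats, labels, num_classes=3)
queries = torch.randn(2, 32)
print(verbalize(queries, embeds).shape)  # torch.Size([2, 3]); softmax gives probabilities
```

Because the label embeddings are computed from features rather than learned, this verbalizer adds no trainable parameters of its own.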
Related research:

05/25/2022
Learning a Better Initialization for Soft Prompts via Meta-Learning
Prompt tuning (PT) is an effective approach to adapting pre-trained lang...

03/12/2023
Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models
Prompt tuning, a recently emerging paradigm, enables the powerful vision...

10/29/2022
STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification
The effectiveness of prompt learning has been demonstrated in different ...

02/16/2023
Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?
Prompt tuning (PT) which only tunes the embeddings of an additional sequ...

09/23/2022
MetaPrompting: Learning to Learn Better Prompts
Prompting method is regarded as one of the crucial progress for few-shot...

06/27/2022
Leveraging Language for Accelerated Learning of Tool Manipulation
Robust and generalized tool manipulation requires an understanding of th...

07/19/2020
Meta-learning for Few-shot Natural Language Processing: A Survey
Few-shot natural language processing (NLP) refers to NLP tasks that are ...
