Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning

03/06/2023
by Zhen Wang, et al.

Prompt tuning, in which a base pretrained model is adapted to each task by conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods typically learn soft prompt vectors from scratch, and it has remained unclear how to exploit rich cross-task knowledge through prompt vectors in a multitask learning setting. We propose multitask prompt tuning (MPT), which first learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts. We then learn multiplicative low-rank updates to this shared prompt to efficiently adapt it to each downstream target task. Extensive experiments on 23 NLP datasets demonstrate that the proposed approach outperforms state-of-the-art methods, in some cases even the full finetuning baseline, despite tuning only 0.035% as many task-specific parameters.
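To make the parameter-efficiency argument concrete, here is a minimal sketch of the multiplicative low-rank update described in the abstract, using NumPy and illustrative shapes (prompt length, hidden size, and the rank-1 factorization are assumptions for illustration, not the paper's exact configuration):

```python
import numpy as np

# A shared soft prompt P_shared (prompt_len x d_model) is assumed to have been
# learned by distilling from multiple task-specific source prompts.
# Each target task then learns only a low-rank (here rank-1) multiplicative
# update u @ v, applied elementwise (Hadamard product) to the shared prompt.

rng = np.random.default_rng(0)
prompt_len, d_model = 100, 768          # illustrative sizes

P_shared = rng.standard_normal((prompt_len, d_model))  # shared, transferable prompt
u = rng.standard_normal((prompt_len, 1))               # task-specific column factor
v = rng.standard_normal((1, d_model))                  # task-specific row factor

# Task-adapted prompt: elementwise product of the shared prompt with u @ v.
P_task = P_shared * (u @ v)

# Parameter budget per task: u and v only, versus a full per-task prompt.
full_params = prompt_len * d_model      # 76,800
task_params = prompt_len + d_model      # 868
print(P_task.shape, task_params / full_params)
```

Under these toy shapes, each task tunes roughly 1% of the parameters a full per-task prompt would need; the paper's reported 0.035% figure is relative to full model finetuning, not to a single prompt.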

Related research:

- Multitask Vision-Language Prompt Tuning (11/21/2022)
- Improving Multitask Retrieval by Promoting Task Specialization (07/01/2023)
- Zoo-Tuning: Adaptive Transfer from a Zoo of Models (06/29/2021)
- Provable Pathways: Learning Multiple Tasks over Multiple Paths (03/08/2023)
- Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning (10/23/2022)
- Pruning Pretrained Encoders with a Multitask Objective (12/10/2021)
- Multitask and Transfer Learning for Autotuning Exascale Applications (08/15/2019)
