PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation

08/22/2022
by Qihuang Zhong, et al.

Prompt-tuning, which freezes pretrained language models (PLMs) and only fine-tunes the few parameters of an additional soft prompt, shows competitive performance against full-parameter fine-tuning (i.e., model-tuning) when the PLM has billions of parameters, but still performs poorly with smaller PLMs. Hence, prompt transfer (PoT), which initializes the target prompt with the trained prompt of a similar source task, has recently been proposed to improve over prompt-tuning. However, such a vanilla PoT approach usually achieves sub-optimal performance, because (i) PoT is sensitive to the similarity of the source-target pair, and (ii) directly fine-tuning the prompt initialized with the source prompt on the target task may lead to catastrophic forgetting of source knowledge. In response to these problems, we propose a new metric to accurately predict prompt transferability (regarding (i)), and a novel PoT approach (namely PANDA) that leverages knowledge distillation to transfer the "knowledge" from the source prompt to the target prompt in a subtle manner and effectively alleviate catastrophic forgetting (regarding (ii)). Furthermore, to achieve adaptive prompt transfer for each source-target pair, we use our metric to control the knowledge transfer in the PANDA approach. Extensive and systematic experiments on 189 combinations of 21 source and 9 target datasets across 5 scales of PLMs demonstrate that: 1) our proposed metric works well in predicting prompt transferability; 2) PANDA consistently outperforms the vanilla PoT approach by an average of 2.3% (and up to 24.1%); and 3) with the PANDA approach, prompt-tuning can achieve competitive and even better performance than model-tuning across various PLM scales. Code and models will be released upon acceptance.
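To make the idea concrete, below is a minimal PyTorch sketch of prompt transfer combined with knowledge distillation. This is not the authors' implementation: the frozen PLM is replaced by a tiny stand-in encoder (`FrozenPLM`), and the hypothetical scalar `alpha` plays the role of the predicted transferability score that weights how strongly the source prompt's predictions are distilled into the trainable target prompt.

```python
# Minimal sketch (not the authors' code) of prompt transfer + knowledge
# distillation, assuming a PyTorch setup. A tiny encoder stands in for the
# frozen PLM; only the target soft prompt is trainable.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, PROMPT_LEN, NUM_CLASSES = 1000, 64, 8, 3

class FrozenPLM(nn.Module):
    """Stand-in for a frozen pretrained encoder with a classification head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(DIM, NUM_CLASSES)
        for p in self.parameters():      # the PLM stays frozen in prompt-tuning
            p.requires_grad = False

    def forward(self, input_ids, soft_prompt):
        # Prepend the soft prompt embeddings to the token embeddings.
        tok = self.embed(input_ids)                                  # (B, L, D)
        prompt = soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        hidden = self.encoder(torch.cat([prompt, tok], dim=1))
        return self.head(hidden.mean(dim=1))                         # (B, C)

plm = FrozenPLM()
source_prompt = torch.randn(PROMPT_LEN, DIM)         # trained on a source task, kept fixed
target_prompt = nn.Parameter(source_prompt.clone())  # initialized from the source prompt
optimizer = torch.optim.AdamW([target_prompt], lr=3e-2)

alpha = 0.7  # hypothetical transferability score in [0, 1] for this source-target pair

def panda_style_loss(input_ids, labels, temperature=2.0):
    """Task loss on the target prompt plus distillation from the source prompt."""
    student_logits = plm(input_ids, target_prompt)
    with torch.no_grad():                             # teacher: frozen PLM + source prompt
        teacher_logits = plm(input_ids, source_prompt)
    task_loss = F.cross_entropy(student_logits, labels)
    distill_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean") * temperature ** 2
    return (1 - alpha) * task_loss + alpha * distill_loss

# One toy training step on random data.
input_ids = torch.randint(0, VOCAB, (4, 16))
labels = torch.randint(0, NUM_CLASSES, (4,))
optimizer.zero_grad()
loss = panda_style_loss(input_ids, labels)
loss.backward()
optimizer.step()
```

In the paper, the strength of the knowledge transfer is controlled by the proposed transferability metric for each source-target pair; the fixed `alpha` above is only a stand-in for that predicted score.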

Related research

11/12/2021 · On Transferability of Prompt Tuning for Natural Language Understanding
Prompt tuning (PT) is a promising parameter-efficient method to utilize ...

06/19/2023 · Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference
Fine-tuning has been proven to be a simple and effective technique to tr...

08/29/2019 · Learning to Transfer Learn
We propose a novel framework, learning to transfer learn (L2TL), to impr...

04/23/2018 · Parameter Transfer Unit for Deep Neural Networks
Parameters in deep neural networks which are trained on large-scale data...

04/12/2017 · Representation Stability as a Regularizer for Improved Text Analytics Transfer Learning
Although neural networks are well suited for sequential transfer learnin...

05/01/2023 · Refined Response Distillation for Class-Incremental Player Detection
Detecting players from sports broadcast videos is essential for intellig...

06/20/2023 · On Compositionality and Improved Training of NADO
NeurAlly-Decomposed Oracle (NADO) is a powerful approach for controllabl...
