Evaluating the Robustness of Discrete Prompts

02/11/2023
by Yoichi Ishibashi, et al.

Discrete prompts have been used for fine-tuning Pre-trained Language Models for diverse NLP tasks. In particular, automatic methods that generate discrete prompts from a small set of training instances have reported superior performance. However, a closer look at the learnt prompts reveals that they contain noisy and counter-intuitive lexical constructs that would not be encountered in manually-written prompts. This raises an important yet understudied question regarding the robustness of automatically learnt discrete prompts when used in downstream tasks. To address this question, we conduct a systematic study of the robustness of discrete prompts by applying carefully designed perturbations to prompts learnt with AutoPrompt and then measuring their performance on two Natural Language Inference (NLI) datasets. Our experimental results show that although discrete prompt-based methods remain relatively robust against perturbations to NLI inputs, they are highly sensitive to other types of perturbations, such as shuffling and deletion of prompt tokens. Moreover, they generalize poorly across different NLI datasets. We hope our findings will inspire future work on robust discrete prompt learning.
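The two prompt-level perturbations named above (shuffling and deletion of prompt tokens) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the example trigger tokens are hypothetical stand-ins for the kind of counter-intuitive tokens AutoPrompt tends to learn.

```python
import random

def shuffle_tokens(prompt_tokens, seed=0):
    """Return a copy of the prompt with its tokens randomly reordered."""
    rng = random.Random(seed)
    shuffled = list(prompt_tokens)
    rng.shuffle(shuffled)
    return shuffled

def delete_token(prompt_tokens, index):
    """Return a copy of the prompt with the token at `index` removed."""
    return list(prompt_tokens[:index]) + list(prompt_tokens[index + 1:])

# Hypothetical AutoPrompt-style trigger tokens (for illustration only)
prompt = ["atmosphere", "alot", "dialogue", "Clone", "totally"]
print(shuffle_tokens(prompt))     # same tokens, different order
print(delete_token(prompt, 2))    # prompt with the third token removed
```

Each perturbed prompt would then be evaluated on the downstream NLI task exactly as the original prompt is, so that any drop in accuracy can be attributed to the perturbation.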


