Do Prompt-Based Models Really Understand the Meaning of their Prompts?

09/02/2021
by Albert Webson, et al.

Recently, a boom of papers has shown extraordinary progress in few-shot learning with various prompt-based models. Such success can give the impression that prompts help models learn faster in the same way that humans learn faster when provided with task instructions expressed in natural language. In this study, we experiment with over 30 prompts manually written for natural language inference (NLI). We find that models learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively "good" prompts. Additionally, we find that model performance depends more on the choice of the LM target words (a.k.a. the "verbalizer" that converts the LM's vocabulary prediction into class labels) than on the text of the prompt itself. In sum, we find little evidence suggesting that existing prompt-based models truly understand the meaning of their given prompts.
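To make the prompt/verbalizer distinction concrete, here is a minimal sketch of how a prompt-based NLI classifier is typically assembled. The template wording, function names, and the stand-in log-probabilities are illustrative assumptions, not the paper's actual prompts or scores; a real system would obtain the log-probabilities from a language model.

```python
def build_prompt(premise: str, hypothesis: str) -> str:
    """An 'instructive' template: asks whether the hypothesis follows
    from the premise. (Illustrative wording, not from the paper.)"""
    return f'{premise} Question: does this mean that "{hypothesis}" is true? Answer:'

# The "verbalizer" maps the LM's predicted word back to a task label.
# The paper's finding is that this mapping matters more than the prompt text.
VERBALIZER = {"yes": "entailment", "no": "non-entailment"}

def classify(token_logprobs: dict) -> str:
    """Pick the verbalizer word the LM assigns the highest log-probability."""
    best_word = max(VERBALIZER, key=lambda w: token_logprobs.get(w, float("-inf")))
    return VERBALIZER[best_word]

prompt = build_prompt("A dog is running in a park.", "An animal is outdoors.")
# Stand-in scores in place of a real LM call:
label = classify({"yes": -0.2, "no": -2.1})
print(label)  # -> entailment
```

Swapping the prompt template for an irrelevant or misleading one leaves this pipeline unchanged; only `VERBALIZER` ties the model's output to the task labels, which is why the choice of target words can dominate performance.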

research
10/22/2020

The Turking Test: Can Language Models Understand Instructions?

Supervised machine learning provides the learner with a set of input-out...
research
07/01/2019

Natural Language Understanding with the Quora Question Pairs Dataset

This paper explores the task Natural Language Understanding (NLU) by loo...
research
10/23/2022

Conformal Predictor for Improving Zero-shot Text Classification Efficiency

Pre-trained language models (PLMs) have been shown effective for zero-sh...
research
10/27/2017

One-shot and few-shot learning of word embeddings

Standard deep learning systems require thousands or millions of examples...
research
01/17/2023

Are Language Models Worse than Humans at Following Prompts? It's Complicated

Prompts have been the center of progress in advancing language models' z...
research
05/26/2023

AMPERE: AMR-Aware Prefix for Generation-Based Event Argument Extraction Model

Event argument extraction (EAE) identifies event arguments and their spe...
research
02/17/2017

Be Precise or Fuzzy: Learning the Meaning of Cardinals and Quantifiers from Vision

People can refer to quantities in a visual scene by using either exact c...
