
Revisiting Automated Prompting: Are We Actually Doing Better?

by Yulin Zhou et al.
University of Cambridge
University of Oxford
Imperial College London

Current literature demonstrates that Large Language Models (LLMs) are strong few-shot learners, and that prompting significantly improves their performance on a range of downstream tasks in few-shot settings. Efforts to automate human-led prompting followed, with some progress: in particular, subsequent work demonstrated that automated prompting can outperform fine-tuning in certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six downstream tasks and a larger range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompts. Our work suggests that, in addition to fine-tuning, manual prompts should be used as a baseline in this line of research.
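To make the comparison concrete, the "simple manual prompt" baseline discussed above can be sketched as follows. This is an illustrative assumption about what such a baseline looks like, not code from the paper: a hand-written cloze-style template, a label verbalizer, and K labelled demonstrations concatenated before the test input. The toy sentiment task, template wording, and examples are all hypothetical.

```python
# Minimal sketch of a manual (human-written) K-shot prompt baseline,
# of the kind the paper argues should be compared against automated
# prompting. Task, template, and data below are illustrative only.

def build_manual_prompt(train_examples, test_text, template, verbalizer):
    """Concatenate K labelled demonstrations, then the unlabelled test input."""
    parts = [template.format(text=x, label=verbalizer[y])
             for x, y in train_examples]
    # For the test input, leave the label slot empty for the LLM to fill;
    # strip the trailing space and period left by the empty slot.
    parts.append(template.format(text=test_text, label="").rstrip(" ."))
    return "\n".join(parts)

# K = 2 shots for a toy sentiment task (hypothetical data).
shots = [("A moving, beautifully acted film.", 1),
         ("Dull and far too long.", 0)]
verbalizer = {0: "terrible", 1: "great"}
template = "Review: {text} It was {label}."

prompt = build_manual_prompt(shots, "An instant classic.", template, verbalizer)
print(prompt)
```

The resulting string ends with "Review: An instant classic. It was", so the model's next-token distribution over the verbalizer words ("great" vs. "terrible") serves as the classifier.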


