Improving and Simplifying Pattern Exploiting Training

03/22/2021
by Derek Tam, et al.

Recently, pre-trained language models (LMs) have achieved strong performance when fine-tuned on difficult benchmarks like SuperGLUE. However, performance can suffer when very few labeled examples are available for fine-tuning. Pattern Exploiting Training (PET) is a recent approach that leverages patterns for few-shot learning, but it relies on task-specific unlabeled data. In this paper, we focus on few-shot learning without any unlabeled data and introduce ADAPET, which modifies PET's objective to provide denser supervision during fine-tuning. As a result, ADAPET outperforms PET on SuperGLUE without any task-specific unlabeled data. Our code can be found at https://github.com/rrmenon10/ADAPET.
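To make the idea of denser supervision concrete, below is a minimal, hypothetical sketch of a PET-style pattern with a decoupled label objective: rather than a softmax restricted to the verbalizer tokens, each label token's probability over the full vocabulary is pushed up (for the correct label) or down (for incorrect labels) with binary cross-entropy. The model name, pattern, and verbalizer are illustrative choices, not the paper's exact configuration.

```python
# Hypothetical sketch of a decoupled label objective for a
# PET-style pattern. Model, pattern, and verbalizer are assumptions
# chosen for illustration, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForMaskedLM.from_pretrained("albert-base-v2")

# Pattern for a sentiment-style task: the label word fills the mask.
text = "The movie was great. It was [MASK]."
verbalizer = {"positive": "good", "negative": "bad"}
gold = "positive"

inputs = tokenizer(text.replace("[MASK]", tokenizer.mask_token),
                   return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

logits = model(**inputs).logits[0, mask_pos]  # (vocab_size,)
probs = torch.softmax(logits, dim=-1)         # over the full vocabulary

loss = 0.0
for label, word in verbalizer.items():
    tok_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]
    target = torch.tensor(1.0 if label == gold else 0.0)
    # Binary cross-entropy on each label token's full-vocab probability:
    # the correct label is pushed toward 1, incorrect labels toward 0.
    loss = loss + torch.nn.functional.binary_cross_entropy(
        probs[tok_id].clamp(1e-7, 1 - 1e-7), target)

loss.backward()  # gradients flow through the full-vocabulary softmax
```

Because the incorrect label tokens also receive gradients through the full-vocabulary softmax, each labeled example supervises more of the output distribution than a verbalizer-only softmax would, which is one way to obtain denser supervision from scarce labels.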


Related research

10/12/2021 · LiST: Lite Self-training Makes Efficient Few-shot Learners
We present a new method LiST for efficient fine-tuning of large pre-trai...

09/13/2021 · STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
Despite their recent successes in tackling many NLP tasks, large-scale p...

06/19/2023 · Adversarial Robustness of Prompt-based Few-Shot Learning for Natural Language Understanding
State-of-the-art few-shot learning (FSL) methods leverage prompt-based f...

04/13/2023 · Task Adaptive Feature Transformation for One-Shot Learning
We introduce a simple non-linear embedding adaptation layer, which is fi...

06/13/2023 · Few-shot learning for sentence pair classification and its applications in software engineering
Few-shot learning-the ability to train models with access to limited dat...

06/08/2020 · Ensemble Model with Batch Spectral Regularization and Data Blending for Cross-Domain Few-Shot Learning with Unlabeled Data
Deep learning models are difficult to obtain good performance when data ...

04/03/2022 · PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
Current methods for few-shot fine-tuning of pretrained masked language m...
