On the Importance of Distractors for Few-Shot Classification

by Rajshekhar Das et al.

Few-shot classification aims to classify categories of a novel task by learning from just a few (typically, 1 to 5) labelled examples. An effective approach to few-shot classification involves a prior model trained on a large-sample base domain, which is then finetuned on the novel few-shot task to yield generalizable representations. However, task-specific finetuning is prone to overfitting due to the scarcity of training examples. To alleviate this issue, we propose a new finetuning approach based on contrastive learning that reuses unlabelled examples from the base domain in the form of distractors. Unlike the unlabelled data used in prior works, distractors belong to classes that do not overlap with the novel categories. We demonstrate for the first time that including such distractors can significantly boost few-shot generalization. Our technical novelty includes a stochastic pairing of examples sharing the same category in the few-shot task and a weighting term that controls the relative influence of task-specific negatives and distractors. An important aspect of our finetuning objective is that it is agnostic to distractor labels and hence applicable to various base domain settings. Compared to state-of-the-art approaches, our method shows accuracy gains of up to 12% in cross-domain and up to 5% in unsupervised prior-learning settings.
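To make the two ingredients above concrete, the following is a minimal sketch of what such a distractor-augmented contrastive loss could look like. It is not the authors' implementation: the function name, the temperature, and the `distractor_weight` hyperparameter are all illustrative assumptions; only the overall structure (a sampled same-class positive per anchor, with task-specific negatives and distractor negatives combined through a weighting term) follows the description in the abstract.

```python
import torch
import torch.nn.functional as F

def distractor_contrastive_loss(task_feats, task_labels, distractor_feats,
                                temperature=0.1, distractor_weight=0.5):
    """Hypothetical sketch of contrastive finetuning with distractors.

    task_feats:       (N, D) embeddings of the few-shot task examples.
    task_labels:      (N,)   class labels within the task.
    distractor_feats: (M, D) embeddings of unlabelled base-domain distractors
                      (classes assumed disjoint from the task's classes).
    distractor_weight: relative influence of distractor negatives versus
                       task-specific negatives (an assumed hyperparameter).
    """
    z = F.normalize(task_feats, dim=1)        # unit-norm task embeddings
    d = F.normalize(distractor_feats, dim=1)  # unit-norm distractor embeddings

    loss = torch.zeros(())
    n = z.size(0)
    for i in range(n):
        same = (task_labels == task_labels[i]).nonzero(as_tuple=True)[0]
        same = same[same != i]
        if same.numel() == 0:
            continue
        # Stochastic pairing: sample one positive sharing the anchor's class.
        j = same[torch.randint(len(same), (1,))].item()
        pos = torch.exp(z[i] @ z[j] / temperature)
        # Task-specific negatives: every other task example.
        mask = torch.ones(n, dtype=torch.bool)
        mask[i] = False
        task_neg = torch.exp(z[i] @ z[mask].T / temperature).sum()
        # Distractor negatives, scaled by the weighting term.
        dist_neg = torch.exp(z[i] @ d.T / temperature).sum()
        loss = loss - torch.log(pos / (task_neg + distractor_weight * dist_neg))
    return loss / n
```

Because every distractor class is disjoint from the novel categories, distractors can safely serve as negatives for every anchor without label information, which is what makes the objective agnostic to distractor labels.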



