How Well Do Self-Supervised Methods Perform in Cross-Domain Few-Shot Learning?

02/18/2022 · by Yiyi Zhang, et al.

Cross-domain few-shot learning (CDFSL) remains a largely unsolved problem in computer vision, and self-supervised learning presents a promising solution. Both paradigms aim to reduce deep networks' dependence on large-scale labeled data. Although self-supervised methods have advanced dramatically in recent years, their utility for CDFSL remains relatively unexplored. In this paper, we investigate the role of self-supervised representation learning in CDFSL via a thorough evaluation of existing methods. Surprisingly, even with shallow architectures or small training datasets, self-supervised methods can perform favorably against existing state-of-the-art methods. Nevertheless, no single self-supervised approach dominates across all datasets, indicating that existing self-supervised methods are not universally applicable. In addition, we find that representations extracted by self-supervised methods exhibit stronger robustness than those produced by supervised training. Intriguingly, how well self-supervised representations perform on the source domain shows little correlation with their transferability to the target domain. As part of our study, we objectively measure the performance of six representative types of few-shot classifiers. The results suggest the Prototypical Classifier as the standard evaluation recipe for CDFSL.
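To make the recommended evaluation recipe concrete, below is a minimal NumPy sketch of a Prototypical Classifier operating on pre-extracted embeddings. The function name, episode sizes, and toy data are illustrative assumptions for this summary, not taken from the authors' code.

```python
# Minimal sketch of a Prototypical Classifier for N-way K-shot evaluation.
# Assumes features have already been extracted by a frozen backbone;
# all names and the toy episode below are illustrative, not from the paper.
import numpy as np

def proto_classify(support_feats, support_labels, query_feats):
    """Assign each query to the class whose prototype (the mean of its
    support embeddings) is nearest in Euclidean distance.

    support_feats:  (N*K, D) embeddings of the labeled support set
    support_labels: (N*K,)   integer class labels in [0, N)
    query_feats:    (Q, D)   embeddings of the unlabeled queries
    returns:        (Q,)     predicted class labels
    """
    classes = np.unique(support_labels)
    # Prototype = mean of the K support embeddings for each class.
    prototypes = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes]
    )
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]

# Toy 5-way 1-shot episode with random 64-d features.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 64))
labels = np.arange(5)
queries = support + 0.1 * rng.normal(size=(5, 64))  # noisy copies
print(proto_classify(support, labels, queries))     # -> [0 1 2 3 4]
```

Because the prototypes are computed in closed form from the support set, this classifier needs no per-episode training, which makes it a natural fixed recipe for comparing frozen self-supervised representations across domains.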

Related research

01/19/2021 · Cross-domain few-shot learning with unlabelled data
Few-shot learning aims to solve the data scarcity problem. If there is a...

09/07/2023 · CDFSL-V: Cross-Domain Few-Shot Learning for Videos
Few-shot video action recognition is an effective approach to recognizin...

07/11/2022 · A clinically motivated self-supervised approach for content-based image retrieval of CT liver images
Deep learning-based approaches for content-based image retrieval (CBIR) ...

06/16/2023 · UTOPIA: Unconstrained Tracking Objects without Preliminary Examination via Cross-Domain Adaptation
Multiple Object Tracking (MOT) aims to find bounding boxes and identitie...

10/21/2021 · Self-Supervised Visual Representation Learning Using Lightweight Architectures
In self-supervised learning, a model is trained to solve a pretext task,...

11/18/2022 · Weighted Ensemble Self-Supervised Learning
Ensembling has proven to be a powerful technique for boosting model perf...

06/23/2023 · Bring Your Own Data! Self-Supervised Evaluation for Large Language Models
With the rise of Large Language Models (LLMs) and their ubiquitous deplo...
