Empirical Perspectives on One-Shot Semi-supervised Learning

04/08/2020
by Leslie N. Smith, et al.

One of the greatest obstacles to the adoption of deep neural networks for new applications is that training the network typically requires a large number of manually labeled training samples. We empirically investigate the scenario where one has access to large amounts of unlabeled data but requires labeling only a single prototypical sample per class in order to train a deep network (i.e., one-shot semi-supervised learning). Specifically, we investigate the recent results reported in FixMatch for one-shot semi-supervised learning to understand the factors that affect and impede high accuracy and reliability in one-shot semi-supervised learning on CIFAR-10. For example, we discover that one barrier to one-shot semi-supervised learning for high-performance image classification is the unevenness of class accuracy during training. These results point to solutions that might enable more widespread adoption of one-shot semi-supervised training methods for new applications.
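For context on the method under study: FixMatch trains on unlabeled images by pseudo-labeling a weakly augmented view of each image and, when that prediction is confident enough, forcing the model to reproduce the label on a strongly augmented view. Below is a minimal PyTorch sketch of that unlabeled-data loss, together with a small helper for tracking per-class accuracy, the quantity whose unevenness the paper identifies as a barrier. The function names and the assumption of externally supplied weak/strong augmented batches are illustrative and not taken from the authors' code; the 0.95 confidence threshold is FixMatch's default.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    """Consistency loss on a batch of unlabeled images.

    x_weak and x_strong are weakly and strongly augmented views of the
    same images; threshold is the confidence cutoff (0.95 in FixMatch).
    """
    with torch.no_grad():
        # Pseudo-labels come from the weakly augmented view.
        probs = torch.softmax(model(x_weak), dim=-1)
        max_probs, pseudo_labels = probs.max(dim=-1)
        # Only confident predictions contribute to the loss.
        mask = (max_probs >= threshold).float()

    # The model must reproduce the pseudo-labels on the strong view.
    logits_strong = model(x_strong)
    loss = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (loss * mask).mean()

def per_class_accuracy(logits, labels, num_classes=10):
    """Accuracy broken out by class, e.g. on the CIFAR-10 test set.

    Logging this vector over training exposes the uneven per-class
    behavior that the paper reports for one-shot runs.
    """
    preds = logits.argmax(dim=-1)
    acc = torch.zeros(num_classes)
    for c in range(num_classes):
        sel = labels == c
        if sel.any():
            acc[c] = (preds[sel] == c).float().mean()
    return acc
```

In FixMatch this unlabeled loss is added to an ordinary cross-entropy loss on the labeled samples (here, a single sample per class); tracking the per-class accuracy vector over such a run is one way to observe the unevenness the paper describes.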

Related research:

11/18/2020 · FROST: Faster and more Robust One-shot Semi-supervised Training
Recent advances in one-shot semi-supervised learning have lowered the ba...

06/16/2020 · Building One-Shot Semi-supervised (BOSS) Learning up to Fully Supervised Performance
Reaching the performance of fully supervised learning with unlabeled dat...

03/18/2021 · MSMatch: Semi-Supervised Multispectral Scene Classification with Few Labels
Supervised learning techniques are at the center of many tasks in remote...

07/07/2022 · FastHebb: Scaling Hebbian Training of Deep Neural Networks to ImageNet Level
Learning algorithms for Deep Neural Networks are typically based on supe...

06/13/2019 · Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training
We introduce the use of Bayesian optimal experimental design techniques ...

05/28/2021 · Noised Consistency Training for Text Summarization
Neural abstractive summarization methods often require large quantities ...

11/12/2017 · Semi-Supervised Learning via New Deep Network Inversion
We exploit a recently derived inversion scheme for arbitrary deep neural...
