How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

01/21/2022
by   Shuai Zhang, et al.

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when labeled data are limited. Despite its empirical successes, its theoretical characterization remains elusive. To the best of our knowledge, this work establishes the first theoretical analysis of the widely used iterative self-training paradigm and proves the benefits of unlabeled data for both training convergence and generalization. To make the analysis tractable, we focus on one-hidden-layer neural networks; even for such shallow networks, a theoretical understanding of iterative self-training is non-trivial. A key challenge is that existing neural-network landscape analyses, built on supervised learning, no longer hold in the (semi-supervised) self-training paradigm. We address this challenge and prove that iterative self-training converges linearly, with both the convergence rate and the generalization accuracy improved on the order of 1/√M, where M is the number of unlabeled samples. Experiments ranging from shallow to deep neural networks are also provided to corroborate the established theoretical insights on self-training.
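To make the iterative self-training paradigm concrete, here is a minimal NumPy sketch, not the paper's exact algorithm: a one-hidden-layer ReLU network is alternately used to pseudo-label an unlabeled pool and then refit by gradient descent on the labeled plus pseudo-labeled data. All names, sizes, and hyperparameters (`hidden`, `rounds`, `inner_steps`, `lr`) are illustrative assumptions, and the squared loss and plain gradient descent are simplifications for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W, v, X):
    # One-hidden-layer ReLU network: f(X) = ReLU(X W) v.
    return np.maximum(X @ W, 0.0) @ v

def iterative_self_train(X_lab, y_lab, X_unlab, hidden=16,
                         rounds=8, inner_steps=300, lr=0.02):
    # Iterative self-training (illustrative sketch): alternate between
    # (1) pseudo-labeling the unlabeled pool with the current network and
    # (2) a supervised fitting step on labeled + pseudo-labeled data.
    d = X_lab.shape[1]
    W = rng.normal(scale=0.5, size=(d, hidden))
    v = rng.normal(scale=0.5, size=hidden)
    for _ in range(rounds):
        # Step 1: generate pseudo-labels with the current network.
        y_pseudo = forward(W, v, X_unlab)
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, y_pseudo])
        # Step 2: gradient descent on the squared loss over the mixed set.
        for _ in range(inner_steps):
            H = np.maximum(X @ W, 0.0)   # hidden activations
            err = H @ v - y              # residuals
            grad_v = H.T @ err / len(y)
            grad_W = X.T @ ((err[:, None] * v) * (H > 0)) / len(y)
            W -= lr * grad_W
            v -= lr * grad_v
    return W, v
```

In this sketch the unlabeled samples enter only through their pseudo-labels, which are refreshed every outer round; the paper's analysis concerns how enlarging that pool (larger M) tightens both the convergence rate and the generalization error.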



Related research

- 11/18/2022: Why pseudo label based algorithm is effective? –from the perspective of pseudo labeled data
  Recently, pseudo label based semi-supervised learning has achieved great...

- 10/07/2020: Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
  Self-training algorithms, which train a model to fit pseudolabels predic...

- 04/28/2023: Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics
  Self-training based semi-supervised learning algorithms have enabled the...

- 02/17/2020: Convergence of End-to-End Training in Deep Unsupervised Contrastive Learning
  Unsupervised contrastive learning has gained increasing attention in the...

- 03/22/2018: Learning through deterministic assignment of hidden parameters
  Supervised learning frequently boils down to determining hidden and brig...

- 05/24/2019: Robustness to Adversarial Perturbations in Learning from Incomplete Data
  What is the role of unlabeled data in an inference problem, when the pre...

- 10/22/2019: Class Mean Vectors, Self Monitoring and Self Learning for Neural Classifiers
  In this paper we explore the role of sample mean in building a neural ne...
