Slimmable Networks for Contrastive Self-supervised Learning

by Shuai Zhao, et al.

Self-supervised learning has made great progress in pre-training large models but struggles with small ones. Previous solutions to this problem rely mainly on knowledge distillation and follow a two-stage procedure: first train a large teacher model, then distill it to improve the generalization ability of small models. In this work, we present a one-stage solution that yields pre-trained small models without an extra teacher: slimmable networks for contrastive self-supervised learning (SlimCLR). A slimmable network contains a full network and several weight-sharing sub-networks, so a single pre-training run produces multiple networks, including small ones with low computation cost.

In the self-supervised setting, however, interference between the weight-sharing networks causes severe performance degradation. One symptom of this interference is gradient imbalance: a small fraction of the parameters produces the dominant gradients during backpropagation, so the main parameters may not be fully optimized. Divergent gradient directions across the different networks can also cause interference.

To overcome these problems, we make the main parameters produce the dominant gradients and provide consistent guidance to the sub-networks via three techniques: slow-start training of sub-networks, online distillation, and loss re-weighting according to model size. In addition, a switchable linear probe layer is applied during linear evaluation to avoid interference between weight-sharing linear layers. We instantiate SlimCLR with typical contrastive learning frameworks and achieve better performance than prior methods with fewer parameters and FLOPs.
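To make the weight-sharing and training ideas concrete, here is a minimal PyTorch sketch, not the authors' implementation. It assumes a toy two-layer encoder whose sub-networks reuse the leading slice of the full network's hidden weights, and a training step in which sub-networks are skipped during a slow-start phase, then guided by online distillation from the detached full-network output, with each loss weighted by its width multiplier as a stand-in for "re-weighting according to model size". The full network's own objective is a placeholder for the contrastive loss, which would require two augmented views.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableMLP(nn.Module):
    """Toy slimmable encoder: a sub-network at width_mult < 1 reuses the
    leading rows/columns of the full network's hidden weights."""
    def __init__(self, dim_in=32, dim_hidden=64, dim_out=16):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(dim_hidden, dim_in) * 0.02)
        self.b1 = nn.Parameter(torch.zeros(dim_hidden))
        self.w2 = nn.Parameter(torch.randn(dim_out, dim_hidden) * 0.02)
        self.b2 = nn.Parameter(torch.zeros(dim_out))

    def forward(self, x, width_mult=1.0):
        # Slim only the hidden width; the embedding dimension stays fixed.
        h = max(1, int(self.w1.shape[0] * width_mult))
        z = F.relu(F.linear(x, self.w1[:h], self.b1[:h]))
        return F.linear(z, self.w2[:, :h], self.b2)

def slimclr_step(model, x, widths=(1.0, 0.5, 0.25), slow_start=False):
    """One training step in the spirit of the abstract (hypothetical names).
    The full network trains on its own objective; sub-networks receive
    consistent guidance via online distillation from the full network."""
    z_full = model(x, width_mult=widths[0])
    # Placeholder objective for the full network (stands in for the
    # contrastive loss over two augmented views).
    loss = widths[0] * z_full.pow(2).mean()
    if not slow_start:  # sub-networks only join after the slow-start phase
        teacher = F.normalize(z_full.detach(), dim=-1)  # no gradient to teacher
        for w in widths[1:]:
            z_sub = F.normalize(model(x, width_mult=w), dim=-1)
            # Online distillation: pull the sub-network's embedding toward
            # the full network's, with a size-dependent loss weight.
            loss = loss + w * (1.0 - (z_sub * teacher).sum(dim=-1).mean())
    return loss
```

Detaching the teacher output ensures the dominant gradients still flow through the full network's own objective, while the sub-network losses only update the shared parameter slices they actually use.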


