G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling

09/25/2020
by Souradip Chakraborty, et al.

In the realm of computer vision, it is evident that deep neural networks perform better in a supervised setting with a large amount of labeled data. The representations learned with supervision are not only of high quality but also help the model achieve higher accuracy. However, collecting and annotating a large dataset is costly and time-consuming. To avoid this, there has been a lot of research in the field of unsupervised visual representation learning, especially in a self-supervised setting. Amongst the recent advancements in self-supervised methods for visual recognition, Chen et al. show in SimCLR that good-quality representations can indeed be learned without explicit supervision. In SimCLR, the authors maximize the similarity between augmentations of the same image and minimize the similarity between augmentations of different images. A linear classifier trained on the representations learned with this approach yields 76.5% top-1 accuracy on the ImageNet ILSVRC-2012 dataset. In this work, we propose that, with the normalized temperature-scaled cross-entropy (NT-Xent) loss function (as used in SimCLR), it is beneficial not to have images of the same category in the same batch. In an unsupervised setting, the information about which images belong to the same category is missing. We therefore use the latent space representations of a denoising autoencoder trained on the unlabeled dataset and cluster them with k-means to obtain pseudo labels. With this a priori information, we construct batches in which no two images share the same pseudo label. We report comparable performance enhancements on the CIFAR10 dataset and a subset of the ImageNet dataset. We refer to our method as G-SimCLR.
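
To make the loss concrete, here is a minimal NumPy sketch of NT-Xent as described above. The function name nt_xent_loss and the batch layout (rows i and i+N of z hold the two augmented views of image i) are illustrative assumptions, not code from the paper.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """Minimal NT-Xent sketch. z has shape (2N, d); rows i and i+N are
    assumed to hold the two augmented views of the same image."""
    n = z.shape[0] // 2
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize: dot products become cosine similarities
    sim = (z @ z.T) / temperature                      # (2N, 2N) temperature-scaled similarity matrix
    np.fill_diagonal(sim, -np.inf)                     # a sample is never its own positive
    sim = sim - sim.max(axis=1, keepdims=True)         # shift for numerical stability
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each row's positive
    # Cross-entropy of the positive against all other pairs, averaged over all 2N anchors.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Called on a (2N, d) array of projection-head outputs, this returns the scalar contrastive loss averaged over all 2N anchors.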

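The guided batching step can likewise be sketched under a few stated assumptions: the denoising autoencoder's latent codes are precomputed, k-means uses as many clusters as the batch size, and batches are filled round-robin so that no two members share a pseudo label. The helper guided_batches and the round-robin strategy are our own illustrative choices; the paper specifies only the no-repeated-category constraint.

```python
import numpy as np
from sklearn.cluster import KMeans

def guided_batches(latents, batch_size, seed=0):
    """Group sample indices into batches with no repeated pseudo label.

    latents: (num_samples, d) latent codes from a denoising autoencoder
             trained on the unlabeled data (assumed precomputed).
    """
    # Pseudo labels: k-means with as many clusters as the batch size,
    # one assumption among several reasonable choices of k.
    labels = KMeans(n_clusters=batch_size, random_state=seed).fit_predict(latents)

    # Bucket sample indices by pseudo label, shuffled within each bucket.
    rng = np.random.default_rng(seed)
    buckets = [rng.permutation(np.where(labels == c)[0]).tolist()
               for c in range(batch_size)]

    # Round-robin: draw one index per cluster until any bucket runs dry,
    # so every batch contains batch_size distinct pseudo labels.
    batches = []
    while all(buckets):
        batches.append([b.pop() for b in buckets])
    return batches
```

With k equal to the batch size, each batch draws exactly one sample per cluster; the loop stops once any cluster is exhausted, leaving the remainder undrawn in this simplified sketch.
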