Mixture of Self-Supervised Learning

by Aristo Renaldo Ruslim, et al.

Self-supervised learning is a popular method because it can learn image features without labels, overcoming the limited labeled datasets available to supervised learning. It works by first training the model on a pretext task before applying it to a specific downstream task. Common pretext tasks in image recognition include rotation prediction, solving jigsaw puzzles, and predicting the relative positions of image patches. Previous studies have used only one type of transformation as a pretext task, which raises the question of what happens when more than one pretext task is used and a gating network is employed to combine them. We therefore propose the Gated Self-Supervised Learning method to improve image classification: it uses more than one transformation as a pretext task and adopts the Mixture of Experts architecture as a gating network to combine the pretext tasks, so that the model automatically learns to focus on the augmentations most useful for classification. We test the performance of the proposed method in several scenarios, namely imbalanced CIFAR dataset classification, adversarial perturbations, Tiny-ImageNet dataset classification, and semi-supervised learning. Moreover, we use Grad-CAM and t-SNE analyses to examine whether the proposed method identifies the important features that influence image classification and represents the data for each class while properly separating different classes. Our code is available at https://github.com/aristorenaldo/G-SSL
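To make the gating idea concrete, here is a minimal sketch (not the authors' implementation) of how a Mixture-of-Experts-style gate could combine the losses of several pretext tasks into one training objective. The function names, loss values, and gate logits below are illustrative assumptions; in the actual method the gate weights would be produced by a learned gating network.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over gate logits.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def gated_pretext_loss(pretext_losses, gate_logits):
    """Weighted sum of per-task pretext losses, with weights from a softmax gate.

    pretext_losses: array of losses, e.g. [rotation, jigsaw, relative-position].
    gate_logits: raw gate outputs (a learned gating network in practice).
    """
    weights = softmax(gate_logits)
    return float(np.dot(weights, pretext_losses)), weights

# Illustrative values: three pretext tasks and hypothetical gate logits.
losses = np.array([0.9, 0.4, 0.7])   # rotation, jigsaw, relative position
logits = np.array([0.2, 1.5, -0.3])  # would come from the gating network
total, w = gated_pretext_loss(losses, logits)
```

Because the gate weights sum to one, the combined loss always lies between the smallest and largest per-task loss, and during training the gate can shift weight toward the pretext tasks that are most useful for the downstream classification objective.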


