Self-supervised Pretraining of Visual Features in the Wild

by Priya Goyal, et al.

Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods. These results have been achieved in a controlled environment, namely the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image and from any unbounded dataset. In this work, we explore whether self-supervision lives up to its expectation by training large models on random, uncurated images with no supervision. Our final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images with 512 GPUs, achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real-world setting. Interestingly, we also observe that self-supervised models are good few-shot learners, achieving 77.9% top-1 accuracy with access to only 10% of ImageNet.
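SEER is pretrained with SwAV's swapped-prediction objective: each image is augmented into two views, soft cluster assignments ("codes") are computed for one view via Sinkhorn-Knopp over a set of learnable prototypes, and the other view is trained to predict those codes. The sketch below is an illustrative NumPy rendering of that loss, not the authors' implementation; the function names, the temperature and epsilon values, and the toy dimensions are all assumptions.

```python
import numpy as np

def sinkhorn(scores, eps=0.05, n_iters=3):
    """Sinkhorn-Knopp normalization: turn prototype scores into soft
    assignments ("codes") with roughly equal prototype usage."""
    Q = np.exp(scores / eps).T            # shape (K prototypes, B samples)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True) # normalize rows (prototypes)
        Q /= K
        Q /= Q.sum(axis=0, keepdims=True) # normalize columns (samples)
        Q /= B
    return (Q * B).T                      # each sample's code sums to 1

def swav_loss(z1, z2, prototypes, temp=0.1):
    """Swapped prediction: predict the codes of one view from the
    softmax prototype probabilities of the other view."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    def softmax(s):
        e = np.exp(s / temp - (s / temp).max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    z1, z2, C = l2norm(z1), l2norm(z2), l2norm(prototypes)
    s1, s2 = z1 @ C.T, z2 @ C.T           # prototype scores per view
    q1, q2 = sinkhorn(s1), sinkhorn(s2)   # codes (held fixed in practice)
    p1, p2 = softmax(s1), softmax(s2)     # predicted assignments
    # cross-entropy between each view's prediction and the other's codes
    return -0.5 * (np.mean(np.sum(q2 * np.log(p1), axis=1)) +
                   np.mean(np.sum(q1 * np.log(p2), axis=1)))
```

In the full method the codes are treated as constants (no gradient flows through Sinkhorn), and the equal-usage constraint prevents the collapse where every image maps to one prototype, which is what lets training proceed without any labels on uncurated data.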




