Amortised Invariance Learning for Contrastive Self-Supervision

02/24/2023
by Ruchika Chavhan, et al.

Contrastive self-supervised learning methods famously produce high-quality transferable representations by learning invariances to different data augmentations. The invariances established during pre-training can be interpreted as strong inductive biases, which may or may not be helpful depending on whether they match the invariance requirements of downstream tasks. This has motivated several attempts to learn task-specific invariances during pre-training; however, these methods are highly compute-intensive and tedious to train. We introduce the notion of amortised invariance learning for contrastive self-supervision. In the pre-training stage, we parameterize the feature extractor by differentiable invariance hyper-parameters that control the invariances encoded by the representation. Then, for any downstream task, both a linear readout and the task-specific invariance requirements can be learned efficiently and effectively by gradient descent. We evaluate amortised invariances for contrastive learning over two modalities: vision and audio, using two widely used contrastive learning methods in vision, SimCLR and MoCo-v2, with popular architectures such as ResNets and Vision Transformers, and SimCLR with ResNet-18 for audio. We show that our amortised features provide a reliable way to learn diverse downstream tasks with different invariance requirements, while using a single feature extractor and avoiding task-specific pre-training. This provides an exciting perspective that opens up new horizons in the field of general-purpose representation learning.
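To make the idea concrete, here is a minimal PyTorch-style sketch of the two-stage recipe described above: a pre-trained encoder conditioned on differentiable invariance hyper-parameters (one weight per augmentation), and a downstream phase that learns only a linear readout plus those invariance parameters by gradient descent. All names (InvarianceConditionedEncoder, the FiLM-style conditioning, adapt_to_task) and the specific conditioning mechanism are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class InvarianceConditionedEncoder(nn.Module):
    """Hypothetical encoder whose features are modulated by a vector of
    differentiable invariance hyper-parameters (one value per augmentation)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, n_augmentations: int):
        super().__init__()
        self.backbone = backbone  # e.g. a ResNet trunk returning (B, feat_dim) features
        # FiLM-style conditioning: invariance weights produce a scale and shift.
        self.film = nn.Linear(n_augmentations, 2 * feat_dim)

    def forward(self, x: torch.Tensor, invariance_params: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)                                   # (B, feat_dim)
        gamma, beta = self.film(invariance_params).chunk(2, dim=-1)
        return gamma * h + beta                                # invariance-conditioned features


def adapt_to_task(encoder, loader, feat_dim, n_augmentations, n_classes, steps=1000):
    """Downstream adaptation: freeze the pre-trained encoder and learn only a
    linear readout and the task-specific invariance parameters by gradient descent."""
    for p in encoder.parameters():
        p.requires_grad_(False)

    invariance_params = nn.Parameter(torch.full((n_augmentations,), 0.5))
    head = nn.Linear(feat_dim, n_classes)
    opt = torch.optim.Adam([invariance_params, *head.parameters()], lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _, (x, y) in zip(range(steps), loader):
        feats = encoder(x, invariance_params.expand(x.size(0), -1))
        loss = loss_fn(head(feats), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head, invariance_params
```

Under these assumptions, the expensive contrastive pre-training is paid once, and each new task only requires fitting the small head and the invariance vector, which is what makes the invariance learning "amortised".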


