
Contrastive Representation Learning with Trainable Augmentation Channel

by Masanori Koyama, et al.
University of Oxford; Preferred Infrastructure

In contrastive representation learning, representations are trained so that image instances can be distinguished even when the images are altered by augmentations. However, depending on the dataset, some augmentations can corrupt the information in the images beyond recognition, and such augmentations can result in collapsed representations. We present a partial solution to this problem by formalizing a stochastic encoding process in which there exists a tug-of-war between the data corruption introduced by the augmentations and the information preserved by the encoder. We show that, with an InfoMax objective based on this framework, we can learn a data-dependent distribution of augmentations that avoids the collapse of the representation.
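The abstract describes an interplay between a stochastic augmentation channel and an InfoMax-style contrastive objective. The paper's actual formulation is not reproduced here; as a rough, hedged illustration of that interplay, the following NumPy sketch computes an InfoNCE loss over two augmented views, where the augmentation is a Gaussian noise channel whose scale (`log_sigma`) is a parameter one would optimize jointly with the encoder. All function names and shapes are assumptions for the sake of the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy deterministic encoder: linear map followed by L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def augment(x, log_sigma):
    """Stochastic augmentation channel: additive Gaussian noise whose
    per-feature scale is a trainable parameter (a stand-in for the paper's
    data-dependent augmentation distribution)."""
    sigma = np.exp(log_sigma)
    return x + sigma * rng.standard_normal(x.shape)

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE objective: each instance's two views are positives (diagonal),
    all other instances in the batch are negatives."""
    logits = (z1 @ z2.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

N, D, K = 8, 16, 4                      # batch size, input dim, embedding dim
x = rng.standard_normal((N, D))
W = rng.standard_normal((D, K))         # encoder weights (trainable)
log_sigma = np.full(D, -1.0)            # augmentation scale (trainable)

z1 = encoder(augment(x, log_sigma), W)  # first augmented view
z2 = encoder(augment(x, log_sigma), W)  # second augmented view
loss = info_nce(z1, z2)
```

In this toy picture the tug-of-war is visible in the gradients: increasing `log_sigma` corrupts the views and raises the loss, while the encoder `W` is pushed to preserve whatever instance information survives the channel; an augmentation strong enough to destroy all instance information would drive the representation toward collapse.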

