Leveraging the Third Dimension in Contrastive Learning

01/27/2023
by Sumukh Aithal, et al.

Self-Supervised Learning (SSL) methods operate on unlabeled data to learn robust representations useful for downstream tasks. Most SSL methods rely on augmentations obtained by transforming the 2D image pixel map. These augmentations ignore the fact that biological vision takes place in an immersive three-dimensional, temporally contiguous environment, and that low-level biological vision relies heavily on depth cues. Using a signal provided by a pretrained state-of-the-art monocular RGB-to-depth model (the Depth Prediction Transformer, Ranftl et al., 2021), we explore two distinct approaches to incorporating depth signals into the SSL framework. First, we evaluate contrastive learning using an RGB+depth input representation. Second, we use the depth signal to generate novel views from slightly different camera positions, thereby producing a 3D augmentation for contrastive learning. We evaluate these two approaches on three different SSL methods – BYOL, SimSiam, and SwAV – using the ImageNette (a 10-class subset of ImageNet), ImageNet-100, and ImageNet-1k datasets. We find that both approaches to incorporating depth signals improve the robustness and generalization of the baseline SSL methods, though the first approach (depth-channel concatenation) is superior. For instance, BYOL with the additional depth channel improves downstream classification accuracy from 85.3% to 88.0% on ImageNette and from 84.1% to 87.0% on ImageNet-C.
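To make the first approach concrete, here is a minimal PyTorch sketch of depth-channel concatenation: a predicted depth map is appended to the RGB input as a fourth channel, and a standard ResNet encoder's first convolution is widened to accept it. This is not the authors' implementation; `estimate_depth` is a hypothetical stand-in for a frozen monocular depth model such as DPT, and the mean-weight initialization of the new depth channel is an assumed design choice.

```python
# Sketch (not the authors' code) of SSL with an RGB+depth input representation.
import torch
import torch.nn as nn
from torchvision.models import resnet50


def estimate_depth(rgb: torch.Tensor) -> torch.Tensor:
    """Hypothetical placeholder for a frozen RGB-to-depth model (e.g. DPT).

    Maps a (B, 3, H, W) RGB batch to a (B, 1, H, W) depth map, normalized to
    [0, 1] so it roughly matches the scale of the RGB channels.
    """
    with torch.no_grad():
        # Stand-in: mean intensity as fake "depth" so the sketch runs end to end.
        d = rgb.mean(dim=1, keepdim=True)
        d = (d - d.amin()) / (d.amax() - d.amin() + 1e-8)
    return d


def make_rgbd_encoder() -> nn.Module:
    """ResNet-50 backbone whose first conv accepts 4 (RGB + depth) channels."""
    encoder = resnet50()
    old = encoder.conv1
    encoder.conv1 = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                              stride=old.stride, padding=old.padding, bias=False)
    with torch.no_grad():
        # Assumed initialization: reuse the RGB filters and set the depth
        # channel to their mean, a common trick when widening input channels.
        encoder.conv1.weight[:, :3] = old.weight
        encoder.conv1.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)
    return encoder


if __name__ == "__main__":
    encoder = make_rgbd_encoder()
    # Two independently augmented RGB views of the same batch, as in BYOL/SimSiam.
    view1 = torch.rand(4, 3, 224, 224)
    view2 = torch.rand(4, 3, 224, 224)
    rgbd1 = torch.cat([view1, estimate_depth(view1)], dim=1)  # (4, 4, 224, 224)
    rgbd2 = torch.cat([view2, estimate_depth(view2)], dim=1)
    # These features would be passed to the SSL projector/predictor and loss.
    z1, z2 = encoder(rgbd1), encoder(rgbd2)
    print(z1.shape, z2.shape)
```

The paper's second approach instead uses the predicted depth map to reproject the image from a slightly shifted camera position, producing a 3D-consistent augmented view; that reprojection step is omitted from this sketch.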
