Mixed Autoencoder for Self-supervised Visual Representation Learning

03/30/2023
by   Kai Chen, et al.

Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks by randomly masking image patches and reconstructing them. However, effective data augmentation strategies for MAE remain an open question, unlike in contrastive learning, where augmentation serves as one of the most important components. This paper studies the prevailing mixing augmentation for MAE. We first demonstrate that naive mixing will, on the contrary, degrade model performance due to the increase of mutual information (MI). To address this, we propose homologous recognition, an auxiliary pretext task that not only alleviates the MI increase by explicitly requiring each patch to recognize homologous patches, but also performs object-aware self-supervised pre-training for better downstream dense perception performance. With extensive experiments, we demonstrate that our proposed Mixed Autoencoder (MixedAE) achieves state-of-the-art transfer results among masked image modeling (MIM) augmentations on different downstream tasks with significant efficiency. Specifically, our MixedAE outperforms MAE by +0.3% accuracy, +1.7 mIoU and +0.9 AP on ImageNet-1K, ADE20K and COCO respectively with a standard ViT-Base. Moreover, MixedAE surpasses iBOT, a strong MIM method combined with instance discrimination, while accelerating training by 2x. To the best of our knowledge, this is the first work to consider mixing for MIM from the perspective of pretext task design. Code will be made available.
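
The two ideas at the core of the abstract, patch-level mixing of different images and a homologous-recognition target that asks each patch to identify patches coming from the same source image, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `mix_patches`, the 50% mixing ratio, and the pairwise boolean target are illustrative assumptions.

```python
import torch

def mix_patches(patches_a, patches_b, mix_ratio=0.5):
    """Patch-level mixing of two images plus a homologous-recognition target.

    patches_a, patches_b: (N, D) tensors of patch tokens from two different
    images laid out on the same grid. A random subset of positions is taken
    from image B, the rest from image A (illustrative sketch only).
    """
    num_patches = patches_a.shape[0]
    # Randomly decide, per position, whether the patch comes from image B.
    from_b = torch.rand(num_patches) < mix_ratio             # (N,) bool
    mixed = torch.where(from_b.unsqueeze(-1), patches_b, patches_a)
    # Homologous-recognition target: patches i and j are homologous
    # iff they originate from the same source image.
    homologous = from_b.unsqueeze(0) == from_b.unsqueeze(1)  # (N, N) bool
    return mixed, homologous

# Toy usage: 196 patch tokens (14x14 grid) with 768-dim embeddings.
a, b = torch.randn(196, 768), torch.randn(196, 768)
mixed, target = mix_patches(a, b)
# `target` could supervise a patch-wise prediction of which patches share a
# source image, alongside the usual MAE reconstruction on the mixed input.
```

Under such a formulation, the MAE reconstruction loss is kept on the mixed input, while the homologous-recognition target discourages each patch from aggregating information across different source images, counteracting the MI increase the abstract describes.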


research
04/04/2023

Multi-Level Contrastive Learning for Dense Prediction Task

In this work, we present Multi-Level Contrastive Learning for Dense Pred...
research
11/19/2020

Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations

Contrastive learning has achieved great success in self-supervised visua...
research
05/26/2022

Task-Customized Self-Supervised Pre-training with Scalable Dynamic Routing

Self-supervised learning (SSL), especially contrastive methods, has rais...
research
06/21/2023

Inter-Instance Similarity Modeling for Contrastive Learning

The existing contrastive learning methods widely adopt one-hot instance ...
research
02/27/2023

EDMAE: An Efficient Decoupled Masked Autoencoder for Standard View Identification in Pediatric Echocardiography

We propose an efficient decoupled mask autoencoder (EDMAE) for standard ...
research
05/27/2022

Architecture-Agnostic Masked Image Modeling – From ViT back to CNN

Masked image modeling (MIM), an emerging self-supervised pre-training me...
research
08/31/2023

CL-MAE: Curriculum-Learned Masked Autoencoders

Masked image modeling has been demonstrated as a powerful pretext task f...
