Masked Autoencoders are Robust Data Augmentors

06/10/2022
by   Haohang Xu, et al.

Deep neural networks are capable of learning powerful representations to tackle complex vision tasks, but they are prone to undesirable behaviors such as overfitting. To this end, regularization techniques like image augmentation are necessary for deep neural networks to generalize well. Nevertheless, most prevalent image augmentation recipes confine themselves to off-the-shelf linear transformations such as scaling, flipping, and color jitter. Owing to their hand-crafted nature, these augmentations are insufficient to generate truly hard augmented examples. In this paper, we propose a novel perspective on augmentation to regularize the training process. Inspired by the recent success of applying masked image modeling to self-supervised learning, we adopt a self-supervised masked autoencoder to generate distorted views of the input images. We show that utilizing such model-based nonlinear transformations as data augmentation can improve high-level recognition tasks. We term the proposed method Mask-Reconstruct Augmentation (MRA). Extensive experiments on various image classification benchmarks verify the effectiveness of the proposed augmentation. Specifically, MRA consistently enhances performance on supervised, semi-supervised, and few-shot classification. The code will be available at <https://github.com/haohang96/MRA>.
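The mask-then-reconstruct idea in the abstract can be sketched in a few lines: randomly mask a fraction of image patches, run a reconstruction model over the masked input, and splice the reconstructed content back into the masked regions to form the augmented view. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation; `reconstruct` stands in for a pretrained MAE decoder, and the patch size, mask ratio, and helper names are hypothetical.

```python
import numpy as np

def mask_patches(img, patch=4, mask_ratio=0.75, rng=None):
    """Zero out a random fraction of non-overlapping patches.

    img: float array of shape (H, W, C), with H and W divisible by `patch`.
    Returns the masked image and a (H, W) boolean mask of hidden pixels.
    """
    rng = rng or np.random.default_rng(0)
    H, W, _ = img.shape
    gh, gw = H // patch, W // patch
    n_patches = gh * gw
    n_mask = int(n_patches * mask_ratio)

    # Pick which patches to hide, then expand the patch grid to pixel level.
    patch_mask = np.zeros(n_patches, dtype=bool)
    patch_mask[rng.permutation(n_patches)[:n_mask]] = True
    pixel_mask = np.repeat(np.repeat(patch_mask.reshape(gh, gw), patch, axis=0),
                           patch, axis=1)

    masked = img.copy()
    masked[pixel_mask] = 0.0
    return masked, pixel_mask

def mra_augment(img, reconstruct, patch=4, mask_ratio=0.75, rng=None):
    """Mask-Reconstruct Augmentation sketch: visible pixels stay intact,
    masked pixels are replaced by the model's reconstruction."""
    masked, pixel_mask = mask_patches(img, patch, mask_ratio, rng)
    recon = reconstruct(masked)  # stand-in for a pretrained MAE forward pass
    aug = img.copy()
    aug[pixel_mask] = recon[pixel_mask]
    return aug
```

Because the reconstruction is only approximate, the augmented view is a nonlinear, model-dependent distortion of the input rather than a fixed geometric or photometric transform, which is the property the paper exploits.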

