Mixup Without Hesitation

01/12/2021
by   Hao Yu, et al.

Mixup linearly interpolates pairs of examples to form new samples; it is easy to implement and has been shown to be effective in image classification tasks. However, mixup has two drawbacks: it needs more training epochs to obtain a well-trained model, and it requires tuning a hyper-parameter to gain appropriate capacity, which is a difficult task. In this paper, we find that mixup constantly explores the representation space. Inspired by the exploration-exploitation dilemma in reinforcement learning, we propose mixup Without hesitation (mWh), a concise, effective, and easy-to-use training algorithm. We show that mWh strikes a good balance between exploration and exploitation by gradually replacing mixup with basic data augmentation. It achieves a strong baseline with less training time than original mixup and without searching for an optimal hyper-parameter, i.e., mWh acts as mixup without hesitation. mWh also transfers to CutMix and gains consistent improvements on other machine learning and computer vision tasks such as object detection. Our code is open-source and available at https://github.com/yuhao318/mwh.
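The abstract describes mWh only at a high level. As a rough illustration of "gradually replacing mixup with basic data augmentation", the PyTorch-style sketch below applies mixup with a probability that decays over training. The linear decay schedule, the function names, and the default alpha are assumptions made here for illustration, not the paper's exact algorithm; see the linked repository for the authors' implementation.

```python
import numpy as np
import torch

def mixup(x, y, alpha=1.0):
    """Standard mixup: interpolate a batch with a shuffled copy of itself."""
    lam = np.random.beta(alpha, alpha)        # interpolation weight
    index = torch.randperm(x.size(0))         # random pairing within the batch
    mixed_x = lam * x + (1 - lam) * x[index]
    return mixed_x, y, y[index], lam

def mwh_loss(model, criterion, x, y, epoch, total_epochs, alpha=1.0):
    """Hypothetical mWh-style step: explore with mixup early in training,
    fall back to the plainly augmented batch (exploitation) later on."""
    p_mixup = 1.0 - epoch / total_epochs      # assumed linear decay schedule
    if np.random.rand() < p_mixup:
        mixed_x, y_a, y_b, lam = mixup(x, y, alpha)
        out = model(mixed_x)
        return lam * criterion(out, y_a) + (1 - lam) * criterion(out, y_b)
    # basic data augmentation is assumed to be applied in the data loader
    return criterion(model(x), y)
```

Under this reading, early epochs behave like plain mixup (exploration of the representation space), while late epochs train mostly on ordinarily augmented samples (exploitation), which is consistent with the abstract's claim that no search over the mixup hyper-parameter is needed.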
