How Does Mixup Help With Robustness and Generalization?

10/09/2020
by Linjun Zhang et al.

Mixup is a popular data augmentation technique based on taking convex combinations of pairs of examples and their labels. This simple technique has been shown to substantially improve both the robustness and the generalization of the trained model, but why such improvement occurs is not well understood. In this paper, we provide a theoretical analysis of how Mixup training helps model robustness and generalization. For robustness, we show that minimizing the Mixup loss corresponds to approximately minimizing an upper bound on the adversarial loss. This explains why models trained with Mixup exhibit robustness to several kinds of adversarial attacks, such as the Fast Gradient Sign Method (FGSM). For generalization, we prove that Mixup augmentation corresponds to a specific type of data-adaptive regularization that reduces overfitting. Our analysis provides new insights and a framework to understand Mixup.
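The convex-combination step the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustrative helper (names and the batch-shuffling variant are our assumptions, not the authors' code): a mixing weight is drawn from a Beta(alpha, alpha) distribution and applied to both inputs and one-hot labels.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Illustrative Mixup sketch: mix a batch with a shuffled copy of itself.

    x: (n, ...) array of inputs; y: (n, k) one-hot labels.
    lam ~ Beta(alpha, alpha) sets the interpolation strength.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # single mixing coefficient for the batch
    idx = rng.permutation(len(x))         # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[idx]
    y_mix = lam * y + (1.0 - lam) * y[idx]
    return x_mix, y_mix
```

Training then minimizes the usual loss on (x_mix, y_mix); note the mixed labels are themselves convex combinations, so each row of y_mix still sums to 1.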


Related research

- 07/22/2019: Understanding Adversarial Robustness Through Loss Landscape Geometries. The pursuit of explaining and improving generalization in deep learning ...
- 07/09/2020: Boundary thickness and robustness in learning models. Robustness of machine learning models to various adversarial and non-adv...
- 06/19/2020: A general framework for defining and optimizing robustness. Robustness of neural networks has recently attracted a great amount of i...
- 06/10/2020: On Mixup Regularization. Mixup is a data augmentation technique that creates new examples as conv...
- 06/24/2021: On the (Un-)Avoidability of Adversarial Examples. The phenomenon of adversarial examples in deep learning models has cause...
- 05/20/2018: Improving Adversarial Robustness by Data-Specific Discretization. A recent line of research proposed (either implicitly or explicitly) gra...
- 12/04/2020: Kernel-convoluted Deep Neural Networks with Data Augmentation. The Mixup method (Zhang et al. 2018), which uses linearly interpolated d...
