Robust and On-the-fly Dataset Denoising for Image Classification

by Jiaming Song, et al.

Memorization in over-parameterized neural networks can severely hurt generalization in the presence of mislabeled examples. Yet mislabeled examples are hard to avoid in extremely large datasets collected with weak supervision. We address this problem by reasoning counterfactually about the loss distribution that examples with uniform random labels would have had, had they been trained alongside the real examples, and use this information to remove noisy examples from the training set. First, we observe that examples with uniform random labels have higher losses when trained with stochastic gradient descent under large learning rates. Then, we propose to model the loss distribution of these counterfactual examples using only the network parameters, which captures such examples with remarkable success. Finally, we remove examples whose loss exceeds a certain quantile of the modeled loss distribution. This yields On-the-fly Data Denoising (ODD), a simple yet effective algorithm that is robust to mislabeled examples while introducing almost zero computational overhead compared to standard training. ODD achieves state-of-the-art results on a wide range of datasets, including real-world ones such as WebVision and Clothing1M.
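The core filtering step can be sketched in a few lines. The sketch below is a simplified illustration, not the authors' exact estimator: the counterfactual (random-label) loss distribution is simulated here with Gaussian samples, whereas the paper models it from the network parameters; the function name `odd_filter` and the quantile value are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-example training losses: under SGD with a large learning
# rate, clean examples tend toward low loss and mislabeled ones toward
# high loss (the paper's first observation).
clean_losses = rng.normal(loc=0.5, scale=0.2, size=900)
noisy_losses = rng.normal(loc=3.0, scale=0.5, size=100)
train_losses = np.concatenate([clean_losses, noisy_losses])

# Stand-in for the modeled loss distribution of uniform-random-label
# examples (hypothetical: here just samples matching the noisy regime).
counterfactual_losses = rng.normal(loc=3.0, scale=0.5, size=1000)

def odd_filter(losses, counterfactual, quantile=0.1):
    """Keep examples whose loss falls below the given quantile of the
    counterfactual (random-label) loss distribution."""
    threshold = np.quantile(counterfactual, quantile)
    return losses < threshold  # boolean keep-mask over the training set

keep = odd_filter(train_losses, counterfactual_losses)
```

Because the mask is recomputed from losses already available during training, this style of filtering adds essentially no extra forward passes, which is consistent with the near-zero overhead claimed for ODD.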

