GANs for learning from very high class conditional noisy labels
We use Generative Adversarial Networks (GANs) to design a class conditional label noise (CCN) robust scheme for binary classification. It first generates a set of correctly labelled data points from noisy labelled data and 0.1% clean labels, such that the generated and true (clean) labelled data distributions are close; the generated labelled data is then used to learn a good classifier. The mode collapse problem that arises while generating correct feature-label pairs and the problem of a skewed feature-to-label dimension ratio (∼784:1) are avoided by using a Wasserstein GAN (WGAN) and a simple change of data representation. We also propose another WGAN with an information-theoretic flavour built on top of the new representation. The major advantage of both schemes is their significant improvement over existing methods in the presence of very high CCN rates, without either estimating or cross-validating over the noise rates. We prove that the KL divergence between the clean and noisy distributions increases with the noise rate in the symmetric label noise model; this result can be extended to high CCN rates. This implies that our schemes perform well due to the adversarial nature of GANs. Further, the use of a generative approach (learning the clean joint distribution) while handling noise enables our schemes to perform better than discriminative approaches such as GLC, LDMI and GCE, even when the classes are highly imbalanced. Using the Friedman F test and the Nemenyi post-hoc test, we show that on high dimensional binary-class synthetic, MNIST and Fashion-MNIST datasets, our schemes outperform existing methods and demonstrate consistent performance across noise rates.
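The following PyTorch sketch illustrates the kind of WGAN on joint feature-label vectors that the abstract describes. The abstract does not specify the representation change, so the repeat-the-label trick used here (appending the label K times to rebalance the ∼784:1 feature-to-label dimension ratio), along with the repeat factor K, the network sizes, and the weight-clipping WGAN variant, are illustrative assumptions rather than the authors' exact construction.

    # Minimal WGAN sketch on joint (feature, label-block) vectors.
    # ASSUMPTIONS: label repeated K times to rebalance the ~784:1
    # feature-to-label dimension ratio; weight-clipping WGAN; toy
    # network sizes. None of these are confirmed by the abstract.
    import torch
    import torch.nn as nn

    X_DIM, K, Z_DIM = 784, 64, 100      # K = assumed label-repeat factor
    JOINT_DIM = X_DIM + K

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(Z_DIM, 256), nn.ReLU(),
                nn.Linear(256, JOINT_DIM), nn.Tanh(),  # outputs (x, repeated y)
            )
        def forward(self, z):
            return self.net(z)

    class Critic(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(JOINT_DIM, 256), nn.ReLU(),
                nn.Linear(256, 1),      # no sigmoid: Wasserstein critic
            )
        def forward(self, v):
            return self.net(v)

    def to_joint(x, y):
        """Concatenate features with the label (in {-1,+1}) repeated K times."""
        return torch.cat([x, y.view(-1, 1).repeat(1, K)], dim=1)

    G, D = Generator(), Critic()
    opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
    opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

    def train_step(x, y, n_critic=5, clip=0.01):
        real = to_joint(x, y)
        for _ in range(n_critic):             # critic updates
            z = torch.randn(x.size(0), Z_DIM)
            fake = G(z).detach()
            loss_d = -(D(real).mean() - D(fake).mean())
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            for p in D.parameters():          # weight clipping (WGAN)
                p.data.clamp_(-clip, clip)
        z = torch.randn(x.size(0), Z_DIM)
        loss_g = -D(G(z)).mean()              # generator update
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

After training, each generated joint vector would be split back into a feature part and a label recovered from the sign of the mean of the label block, giving the "generated labelled data" on which a downstream classifier is trained.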
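The KL-monotonicity claim can be checked on a minimal one-dimensional instance. The LaTeX sketch below works out the special case of a Bernoulli(q) clean label passed through symmetric noise of rate ρ; it is an illustrative special case of the abstract's statement, not the paper's general proof.

    % Worked special case: KL between clean and noisy Bernoulli labels
    % increases in the symmetric noise rate rho on [0, 1/2).
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    Let the clean label be Bernoulli($q$) and let symmetric noise of rate
    $\rho$ flip it, so the noisy label is Bernoulli($\tilde q(\rho)$) with
    $\tilde q(\rho) = (1-\rho)q + \rho(1-q) = q + \rho(1-2q)$. Then
    \begin{align*}
    \mathrm{KL}(\rho)
      &= q\log\frac{q}{\tilde q(\rho)}
       + (1-q)\log\frac{1-q}{1-\tilde q(\rho)},\\
    \frac{d\,\mathrm{KL}}{d\rho}
      &= (1-2q)\left[\frac{1-q}{1-\tilde q(\rho)}
       - \frac{q}{\tilde q(\rho)}\right] > 0
    \end{align*}
    for $\rho \in [0,\tfrac12)$ and $q \neq \tfrac12$, since
    $\tilde q(\rho)$ moves $q$ toward $\tfrac12$, so the bracketed term
    has the same sign as $(1-2q)$.
    \end{document}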
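A sketch of the reported statistical comparison is given below: a Friedman test across methods blocked by dataset/noise setting, followed by the Nemenyi post-hoc test. The accuracy matrix is made-up placeholder data, scikit-posthocs is an assumed extra dependency, and note that SciPy implements the chi-square form of the Friedman statistic rather than the Iman-Davenport F refinement the abstract names.

    # Friedman test + Nemenyi post-hoc comparison (illustrative only).
    # The accuracies below are PLACEHOLDER values, not results from the paper.
    import numpy as np
    from scipy.stats import friedmanchisquare
    import scikit_posthocs as sp   # assumed dependency: pip install scikit-posthocs

    # rows = datasets / noise settings, columns = methods
    # (e.g. proposed scheme, GLC, LDMI, GCE -- placeholder ordering)
    acc = np.array([
        [0.91, 0.84, 0.82, 0.80],
        [0.89, 0.80, 0.79, 0.77],
        [0.93, 0.85, 0.83, 0.81],
        [0.88, 0.78, 0.80, 0.76],
        [0.90, 0.82, 0.81, 0.79],
    ])

    stat, p = friedmanchisquare(*acc.T)   # one sample per method
    print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

    # Pairwise Nemenyi comparisons on the same blocked design
    print(sp.posthoc_nemenyi_friedman(acc))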