GMM Discriminant Analysis with Noisy Label for Each Class

by Jian-wei Liu, et al.
China University of Petroleum

Real-world datasets often contain noisy labels, and learning from such datasets with standard classification approaches may not achieve the desired performance. In this paper, we propose Gaussian Mixture Discriminant Analysis (GMDA) with a noisy label for each class. We introduce a flipping probability and a class probability and use EM algorithms to solve the discriminant problem under label noise. We also provide detailed proofs of convergence. Experimental results on synthetic and real-world datasets show that the proposed approach notably outperforms four other state-of-the-art methods.
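The paper's implementation is not reproduced here; the sketch below only illustrates the general idea the abstract describes, namely treating the true class label as a latent variable, modeling the observed label through a flipping probability, and fitting class-conditional Gaussians with EM. The data, the symmetric flip rate `rho`, the isotropic covariance, and all variable names are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class Gaussian data; labels flipped with probability rho
# (rho and the data layout are illustrative assumptions, not from the paper).
n = 500
X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y_true = np.repeat([0, 1], n)
rho = 0.2  # assumed symmetric flipping probability
flip = rng.random(2 * n) < rho
y_obs = np.where(flip, 1 - y_true, y_true)

def gauss_pdf(X, mu, var):
    """Isotropic Gaussian density N(x; mu, var*I)."""
    d = X.shape[1]
    diff = X - mu
    return np.exp(-0.5 * np.sum(diff**2, axis=1) / var) / (2 * np.pi * var) ** (d / 2)

# Parameters: class priors, means, isotropic variances (crude initialization).
pi = np.array([0.5, 0.5])
mu = np.array([[-1.0, -1.0], [1.0, 1.0]])
var = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: posterior of the latent true label z given x and the noisy label,
    # P(z=k | x, y_obs) ∝ pi_k * N(x; mu_k, var_k) * P(y_obs | z=k),
    # where P(y_obs | z=k) is (1-rho) if they agree and rho otherwise.
    lik = np.stack(
        [pi[k] * gauss_pdf(X, mu[k], var[k]) * np.where(y_obs == k, 1 - rho, rho)
         for k in range(2)], axis=1)
    resp = lik / lik.sum(axis=1, keepdims=True)

    # M-step: responsibility-weighted updates of priors, means, variances.
    Nk = resp.sum(axis=0)
    pi = Nk / Nk.sum()
    for k in range(2):
        mu[k] = resp[:, k] @ X / Nk[k]
        var[k] = (resp[:, k] * ((X - mu[k]) ** 2).sum(axis=1)).sum() / (2 * Nk[k])

# Classify from the fitted Gaussians alone (no noisy labels at test time).
post = np.stack([pi[k] * gauss_pdf(X, mu[k], var[k]) for k in range(2)], axis=1)
pred = post.argmax(axis=1)
acc = (pred == y_true).mean()
```

On this well-separated toy problem the EM estimates recover the clean class structure despite 20% flipped labels, so accuracy against the clean labels stays high.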




