Deep Learning From Crowdsourced Labels: Coupled Cross-entropy Minimization, Identifiability, and Regularization

06/05/2023
by Shahana Ibrahim, et al.

Using noisy crowdsourced labels from multiple annotators, a deep learning-based end-to-end (E2E) system aims to learn the label correction mechanism and the neural classifier simultaneously. To this end, many E2E systems concatenate the neural classifier with multiple annotator-specific “label confusion” layers and co-train the two parts in a parameter-coupled manner. The formulated coupled cross-entropy minimization (CCEM)-type criteria are intuitive and work well in practice. Nonetheless, theoretical understanding of the CCEM criterion has been limited. The contribution of this work is twofold: First, performance guarantees of the CCEM criterion are presented. Our analysis reveals for the first time that the CCEM can indeed correctly identify the annotators' confusion characteristics and the desired “ground-truth” neural classifier under realistic conditions, e.g., when only incomplete annotator labeling and finite samples are available. Second, based on the insights learned from our analysis, two regularized variants of the CCEM are proposed. The regularization terms provably enhance the identifiability of the target model parameters in various more challenging cases. A series of synthetic and real data experiments are presented to showcase the effectiveness of our approach.
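To make the criterion concrete, the coupled cross-entropy idea can be sketched as follows: the classifier produces a posterior f(x) over the K true classes, each annotator m is modeled by a column-stochastic confusion matrix A_m, and the loss is the cross-entropy between A_m f(x) and annotator m's observed label, summed over annotators and samples. The NumPy sketch below is illustrative only (function and variable names are not from the paper) and also skips missing annotations, reflecting the incomplete-labeling setting mentioned in the abstract.

```python
import numpy as np

def ccem_loss(f_x, confusions, annotator_labels):
    """Coupled cross-entropy over M annotators (illustrative sketch).

    f_x: (N, K) classifier posteriors; each row sums to 1.
    confusions: (M, K, K) column-stochastic matrices, where
        confusions[m][j, k] = P(annotator m reports j | true class k).
    annotator_labels: (M, N) integer labels; -1 marks a missing annotation.
    """
    M, N = annotator_labels.shape
    loss, count = 0.0, 0
    for m in range(M):
        # annotator m's predicted label distribution: A_m f(x) for each sample
        p_m = f_x @ confusions[m].T          # shape (N, K)
        for n in range(N):
            y = annotator_labels[m, n]
            if y < 0:                        # incomplete labeling: skip
                continue
            loss -= np.log(p_m[n, y] + 1e-12)
            count += 1
    return loss / max(count, 1)

# Toy example: two samples, two classes, two annotators.
f_x = np.array([[0.9, 0.1],
                [0.2, 0.8]])
labels = np.array([[0, 1],
                   [0, 1]])                  # both annotators label correctly
A_identity = np.stack([np.eye(2), np.eye(2)])       # reliable annotators
A_flipped = np.stack([np.eye(2)[::-1], np.eye(2)[::-1]])  # adversarial model

loss_good = ccem_loss(f_x, A_identity, labels)
loss_bad = ccem_loss(f_x, A_flipped, labels)
```

With confusion matrices that match the annotators' actual behavior (here, the identity), the coupled loss is lower than with a mismatched confusion model, which is the intuition behind jointly fitting the confusion layers and the classifier.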


