CrowdTeacher: Robust Co-teaching with Noisy Answers & Sample-specific Perturbations for Tabular Data

03/31/2021
by Mani Sotoodeh, et al.

Ground-truth labels may not be available in many domains, and while learning from crowdsourced labels has been explored, existing models can still fail when annotations are sparse, unreliable, or divergent. Co-teaching methods have shown promising improvements on computer-vision problems with noisy labels by training two classifiers, each on the other's confident samples in every batch. Inspired by this separation of confident and uncertain samples during training, we extend the idea to the crowdsourcing setting. Our model, CrowdTeacher, builds on the observation that perturbation in the input space can improve a classifier's robustness to noisy labels. Treating crowdsourced annotations as a source of label noise, we perturb each sample according to the certainty of its aggregated annotations. The perturbed samples are then fed to a Co-teaching algorithm tuned to accommodate smaller tabular datasets. We showcase the boost in predictive power attained by CrowdTeacher on both synthetic and real datasets across various label-density settings. Our experiments show that the proposed approach outperforms baselines that model individual annotators and then combine their outputs, methods that simultaneously learn a classifier and infer true labels, and the Co-teaching algorithm applied to labels aggregated by common truth-inference methods.
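The abstract describes two mechanisms that can be made concrete: perturbing each sample in proportion to the uncertainty of its aggregated crowd label, and a co-teaching exchange in which two classifiers pass their small-loss (confident) samples to each other. Below is a minimal sketch of these ideas in Python using numpy and scikit-learn; it is not the authors' implementation, and names such as perturb_by_certainty, coteach_step, noise_scale, and keep_ratio are illustrative assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression


def perturb_by_certainty(X, certainty, noise_scale=0.1, rng=None):
    """Add Gaussian feature noise whose magnitude grows as the aggregated
    crowd label for a sample becomes less certain (certainty in [0, 1])."""
    rng = rng if rng is not None else np.random.default_rng(0)
    per_sample_scale = noise_scale * (1.0 - certainty)        # uncertain sample -> larger noise
    noise = rng.normal(size=X.shape) * per_sample_scale[:, None]
    return X + noise


def coteach_step(clf_a, clf_b, X_batch, y_batch, keep_ratio=0.7):
    """One co-teaching-style exchange: each classifier picks its small-loss
    (most confident) samples, and the *other* classifier is refit on them."""
    def small_loss_idx(clf):
        # Per-sample cross-entropy of the (possibly noisy) assigned label.
        proba = clf.predict_proba(X_batch)[np.arange(len(y_batch)), y_batch]
        losses = -np.log(np.clip(proba, 1e-12, None))
        n_keep = max(2, int(keep_ratio * len(y_batch)))
        return np.argsort(losses)[:n_keep]

    idx_a, idx_b = small_loss_idx(clf_a), small_loss_idx(clf_b)
    clf_a.fit(X_batch[idx_b], y_batch[idx_b])   # A trains on B's confident picks
    clf_b.fit(X_batch[idx_a], y_batch[idx_a])   # B trains on A's confident picks
    return clf_a, clf_b


# Toy usage: synthetic tabular data, majority-vote labels, and an agreement
# score standing in for the certainty of the aggregated crowd annotations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
agreement = rng.uniform(0.5, 1.0, size=200)
y_noisy = np.where(rng.uniform(size=200) < agreement, y_true, 1 - y_true)

clf_a = LogisticRegression().fit(X, y_noisy)
clf_b = clone(clf_a).set_params(C=0.5).fit(X, y_noisy)   # different regularization so the two views differ
for _ in range(5):
    X_pert = perturb_by_certainty(X, agreement, rng=rng)
    clf_a, clf_b = coteach_step(clf_a, clf_b, X_pert, y_noisy)
```

In the original Co-teaching formulation the small-loss selection rate is typically decayed over epochs as the estimated noise rate grows; the fixed keep_ratio above is a simplification for the sketch.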


Related research

06/23/2018 · Optimizing the Wisdom of the Crowd: Inference, Learning, and Teaching
The unprecedented demand for large amount of data has catalyzed the tren...

07/18/2014 · Bayesian Nonparametric Crowdsourcing
Crowdsourcing has been proven to be an effective and efficient tool to a...

07/11/2021 · Learning from Crowds with Sparse and Imbalanced Annotations
Traditional supervised learning requires ground truth labels for the tra...

01/02/2023 · In Quest of Ground Truth: Learning Confident Models and Estimating Uncertainty in the Presence of Annotator Noise
The performance of the Deep Learning (DL) models depends on the quality ...

03/12/2018 · Leveraging Crowdsourcing Data For Deep Active Learning - An Application: Learning Intents in Alexa
This paper presents a generic Bayesian framework that enables any deep l...

02/10/2019 · Learning From Noisy Labels By Regularized Estimation Of Annotator Confusion
The predictive performance of supervised learning algorithms depends on ...

12/29/2022 · Recovering Top-Two Answers and Confusion Probability in Multi-Choice Crowdsourcing
Crowdsourcing has emerged as an effective platform to label a large volu...
