One-Step Abductive Multi-Target Learning with Diverse Noisy Samples

10/20/2021
by   Yongquan Yang, et al.

One-step abductive multi-target learning (OSAMTL) was proposed to handle complex noisy labels. In this paper, by giving a definition of diverse noisy samples (DNS), we propose one-step abductive multi-target learning with DNS (OSAMTL-DNS), which extends the original OSAMTL to a wider range of tasks involving complex noisy labels.


Related research

11/25/2020 - Handling Noisy Labels via One-Step Abductive Multi-Target Learning
Learning from noisy labels is an important concern because of the lack o...

08/14/2020 - The Impact of Label Noise on a Music Tagger
We explore how much can be learned from noisy labels in audio music tagg...

06/20/2023 - LNL+K: Learning with Noisy Labels and Noise Source Distribution Knowledge
Learning with noisy labels (LNL) is challenging as the model tends to me...

10/28/2022 - Speaker recognition with two-step multi-modal deep cleansing
Neural network-based speaker recognition has achieved significant improv...

05/24/2018 - SOSELETO: A Unified Approach to Transfer Learning and Training with Noisy Labels
We present SOSELETO (SOurce SELEction for Target Optimization), a new me...

07/10/2023 - Robust Feature Learning Against Noisy Labels
Supervised learning of deep neural networks heavily relies on large-scal...

10/18/2022 - CNT (Conditioning on Noisy Targets): A new Algorithm for Leveraging Top-Down Feedback
We propose a novel regularizer for supervised learning called Conditioni...
