Interactive Label Cleaning with Example-based Explanations

by Stefano Teso et al.

We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed in that they only relabel incoming examples that look "suspicious" to the model. As a consequence, mislabeled examples that elude (or don't undergo) this cleaning step end up tainting the training data and the model, with no further chance of being cleaned. We propose Cincer, a novel approach that cleans both new and past data by identifying pairs of mutually incompatible examples. Whenever it detects a suspicious example, Cincer identifies a counter-example in the training set that, according to the model, is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, resolving this possible inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as explanations of the model's suspicion, and highly influential, so as to convey as much information as possible if relabeled. Cincer achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model's suspicions by cleaning the counter-examples helps acquire substantially better data and models, especially when paired with our FIM approximation.
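To make the FIM-based influence idea concrete, here is a minimal sketch of how one might score training examples for incompatibility with a suspicious example. It assumes a simple logistic-regression model and the standard influence approximation IF(z_susp, z_i) ≈ g_susp^T F^{-1} g_i, with the empirical Fisher matrix standing in for the Hessian; the function names and the damping term are illustrative choices, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_grad(w, x, y):
    # Gradient of the logistic loss at one example: (sigmoid(w.x) - y) * x
    return (sigmoid(w @ x) - y) * x

def empirical_fim(w, X, y, damping=1e-3):
    # Empirical Fisher: average outer product of per-example gradients,
    # with a small damping term so the matrix is invertible.
    G = np.stack([per_example_grad(w, x, yi) for x, yi in zip(X, y)])
    d = X.shape[1]
    return G.T @ G / len(X) + damping * np.eye(d)

def influence_scores(w, X_train, y_train, x_susp, y_susp):
    # Score each training example z_i by g_susp^T F^{-1} g_i,
    # using the FIM in place of the loss Hessian.
    F_inv = np.linalg.inv(empirical_fim(w, X_train, y_train))
    g_susp = per_example_grad(w, x_susp, y_susp)
    return np.array([g_susp @ F_inv @ per_example_grad(w, x, yi)
                     for x, yi in zip(X_train, y_train)])
```

In this sketch, a counter-example for a suspicious point could then be chosen as the training example with the most extreme influence score, i.e. the one whose label most strongly conflicts with (or supports) the suspicious example under the current model.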


