Learning from Multiple Unlabeled Datasets with Partial Risk Regularization

by Yuting Tang, et al.

Recent years have witnessed great success in supervised deep learning, where predictive models are trained from large amounts of fully labeled data. In practice, however, labeling such large-scale data can be very costly and may not even be possible for privacy reasons. Therefore, in this paper, we aim to learn an accurate classifier without any class labels. More specifically, we consider the case where multiple sets of unlabeled data and only their class priors, i.e., the proportions of each class, are available. Under this problem setup, we first derive an unbiased estimator of the classification risk that can be computed from the given unlabeled sets, and we theoretically analyze the generalization error of the learned classifier. We then find that the classifier obtained in this way tends to overfit, as its empirical risk goes negative during training. To prevent overfitting, we further propose a partial risk regularization that keeps the partial risks with respect to the unlabeled datasets and classes at certain levels. Experiments demonstrate that our method effectively mitigates overfitting and outperforms state-of-the-art methods for learning from multiple unlabeled sets.
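The abstract does not spell out the estimator itself. As a rough illustration of how a classification risk can be estimated without labels, the following sketch implements the well-known two-set binary special case (the risk-rewriting used in prior unlabeled-unlabeled learning work, e.g. Lu et al.): given two unlabeled sets whose class priors differ, the class-conditional expectations can be solved for linearly, yielding an unbiased but possibly negative empirical risk. The function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def logistic_loss(z):
    # Numerically stable log(1 + exp(-z)).
    return np.logaddexp(0.0, -z)

def uu_risk(scores1, scores2, theta1, theta2, pi):
    """Unbiased risk estimate from two unlabeled sets (illustrative sketch).

    scores1, scores2 -- classifier outputs g(x) on unlabeled sets U1 and U2
    theta1 > theta2  -- class priors (fraction of positives) of U1 and U2
    pi               -- class prior of the test distribution
    """
    if theta1 <= theta2:
        raise ValueError("need theta1 > theta2")
    d = theta1 - theta2
    # Coefficients from inverting p1 = theta1*p+ + (1-theta1)*p-,
    #                             p2 = theta2*p+ + (1-theta2)*p-.
    a = pi * (1 - theta2) / d        # +1-loss weight on U1
    b = (1 - pi) * theta2 / d        # -1-loss weight on U1 (subtracted)
    c = (1 - pi) * theta1 / d        # -1-loss weight on U2
    e = pi * (1 - theta1) / d        # +1-loss weight on U2 (subtracted)
    r1 = np.mean(a * logistic_loss(scores1) - b * logistic_loss(-scores1))
    r2 = np.mean(c * logistic_loss(-scores2) - e * logistic_loss(scores2))
    return r1 + r2
```

Note that the subtracted terms can drive the empirical estimate below zero for a flexible model, which is exactly the negative-risk overfitting phenomenon the abstract describes; in the degenerate case theta1 = 1, theta2 = 0 the estimator reduces to the ordinary supervised risk.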

