On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data

by Nan Lu, et al.

Empirical risk minimization (ERM), with a proper loss function and regularization, is the common practice of supervised classification. In this paper, we study training an arbitrary (from linear to deep) binary classifier from only unlabeled (U) data by ERM, rather than by clustering in the geometric space. A two-step ERM is considered: first an unbiased risk estimator is designed, and then the empirical training risk is minimized. This approach is advantageous in that we can also evaluate the empirical validation risk, which is indispensable for hyperparameter tuning when some validation data is split from the U training data instead of labeled test data. We prove that designing such an estimator is impossible given a single set of U data, but it becomes possible given two sets of U data with different class priors. This answers a fundamental question in weakly-supervised learning, namely what the minimal supervision is for training any binary classifier from only U data. Since the proposed learning method is based on unbiased risk estimates, the asymptotic consistency of the learned classifier is guaranteed. Experiments demonstrate that the proposed method can successfully train deep models like ResNet and outperforms state-of-the-art methods for learning from two sets of U data.
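The key idea can be sketched in code. Given two unlabeled sets drawn from mixtures with different class priors θ₁ > θ₂ and a known test prior π, the two mixture densities can be solved for the positive and negative class-conditionals, which turns the classification risk into a weighted sum of losses over the two unlabeled sets. The following is a minimal NumPy sketch of such a rewrite, not the paper's reference implementation; the function names and the synthetic 1D setup are our own assumptions, and the coefficients follow from inverting the two mixture equations:

```python
import numpy as np

def logistic_loss(z, y):
    # Margin-based logistic loss: ell(z, y) = log(1 + exp(-y * z)).
    return np.log1p(np.exp(-y * z))

def uu_risk(scores_u1, scores_u2, theta1, theta2, pi, loss=logistic_loss):
    """Empirical unbiased risk from two unlabeled sets (a sketch of the
    two-set rewrite; names are ours, not from the paper).

    scores_u1, scores_u2 : real-valued classifier outputs g(x) on each set
    theta1 > theta2      : class priors of the two unlabeled mixtures
    pi                   : class prior at test time
    """
    d = theta1 - theta2
    # Solving p1 = theta1*p+ + (1-theta1)*p- and p2 = theta2*p+ + (1-theta2)*p-
    # for p+ and p- gives these per-set loss weights:
    a_pos = (1 - theta2) * pi / d        # weight of ell(g, +1) over set 1
    a_neg = -theta2 * (1 - pi) / d       # weight of ell(g, -1) over set 1
    b_pos = -(1 - theta1) * pi / d       # weight of ell(g, +1) over set 2
    b_neg = theta1 * (1 - pi) / d        # weight of ell(g, -1) over set 2
    r1 = np.mean(a_pos * loss(scores_u1, +1) + a_neg * loss(scores_u1, -1))
    r2 = np.mean(b_pos * loss(scores_u2, +1) + b_neg * loss(scores_u2, -1))
    return r1 + r2
```

Because the estimator is unbiased, on large synthetic samples its value approaches the fully supervised risk computed with the true labels, which is what makes label-free validation and hyperparameter tuning possible. Note that some weights are negative, so the empirical risk can dip below zero on finite samples (the overfitting issue addressed by later risk-correction work).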


