Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation

by Madan Ravi Ganesh et al.

Like humans, deep networks learn better when samples are organized and introduced in a meaningful order, or curriculum (Weinshall et al., 2018). Conventional approaches to curriculum learning emphasize sample difficulty as the core incremental strategy, which forces networks to learn from small subsets of data and introduces pre-computation overhead. In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a novel approach to curriculum learning that incrementally learns labels instead of incrementally learning difficult samples. LILAC works in two distinct phases: first, in the incremental label introduction phase, we recursively reveal ground-truth labels in small installments while assigning a fake label to the remaining data; second, in the adaptive compensation phase, we compensate for failed predictions by adaptively altering the target vector to a smoother distribution. We evaluate LILAC against the closest comparable methods in batch learning, curriculum learning, and label smoothing across three standard image benchmarks: CIFAR-10, CIFAR-100, and STL-10. Our method consistently outperforms batch learning, achieving higher mean recognition accuracy and lower standard deviation across all benchmarks. We further extend LILAC to achieve the highest performance on CIFAR-10 among methods using simple data augmentation, while exhibiting label-order invariance among other properties.
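The two phases described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the fake-label id of `-1` and the smoothing strength `epsilon` are hypothetical choices introduced here for clarity.

```python
import numpy as np

def incremental_labels(labels, revealed, fake_label=-1):
    """Phase 1 (incremental label introduction): keep ground-truth labels
    only for the classes revealed so far; map every other sample to a
    single fake label (here assumed to be -1)."""
    return [y if y in revealed else fake_label for y in labels]

def adaptive_targets(labels, prev_correct, num_classes, epsilon=0.1):
    """Phase 2 (adaptive compensation): samples the model predicted
    correctly in the previous epoch keep a one-hot target; failed
    predictions get a smoother target that spreads `epsilon` of the
    probability mass over the other classes."""
    targets = np.zeros((len(labels), num_classes))
    for i, (y, correct) in enumerate(zip(labels, prev_correct)):
        if correct:
            targets[i, y] = 1.0              # standard one-hot target
        else:
            targets[i, :] = epsilon / (num_classes - 1)
            targets[i, y] = 1.0 - epsilon    # smoothed peak on the true class
    return targets
```

For example, with classes `{0, 1}` revealed, a sample labeled `3` is trained against the fake label; and a sample the model failed on with `epsilon=0.3` over 3 classes receives the target `[0.15, 0.7, 0.15]` instead of a one-hot vector.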


