Prediction stability as a criterion in active learning

10/27/2019
by Junyu Liu, et al.

Recent breakthroughs made by deep learning rely heavily on large numbers of annotated samples. Active learning is a possible way to overcome this limitation. In contrast to previous active learning algorithms that only use information obtained after training, we propose a new class of methods, named sequential-based methods, that exploit information generated during training. A specific active learning criterion called prediction stability is proposed to demonstrate the feasibility of sequential-based methods. Experiments are conducted on CIFAR-10 and CIFAR-100, and the results indicate that prediction stability is effective and works well on datasets with fewer labeled samples. Prediction stability matches the accuracy of traditional acquisition functions such as entropy on CIFAR-10, and notably outperforms them on CIFAR-100.
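The abstract does not spell out how prediction stability is computed, so the following is only a minimal illustrative sketch of the general idea: score each unlabeled sample by how much the model's predictions for it fluctuate across training epochs, and query the least stable samples. The function names (`prediction_stability_scores`, `select_for_labeling`), the `epoch_probs` layout, and the squared-difference measure are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def prediction_stability_scores(epoch_probs: np.ndarray) -> np.ndarray:
    """Score each unlabeled sample by how much its predicted class
    probabilities change across training epochs (assumed measure).

    epoch_probs: shape (n_epochs, n_samples, n_classes), the model's
    softmax outputs on the unlabeled pool recorded once per epoch.

    Returns one instability score per sample; higher means the
    prediction fluctuated more during training.
    """
    # Squared difference between consecutive-epoch predictions,
    # averaged over classes and over epoch pairs.
    diffs = np.diff(epoch_probs, axis=0)      # (n_epochs - 1, n_samples, n_classes)
    return np.mean(diffs ** 2, axis=(0, 2))   # (n_samples,)

def select_for_labeling(epoch_probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` least-stable pool samples to send to the oracle."""
    scores = prediction_stability_scores(epoch_probs)
    return np.argsort(-scores)[:budget]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake softmax outputs: 5 epochs, 1000 pool samples, 10 classes (CIFAR-10-like).
    probs = rng.dirichlet(np.ones(10), size=(5, 1000))
    picked = select_for_labeling(probs, budget=100)
    print(picked[:10])
```

In a real active-learning loop this scoring step would replace an acquisition function such as entropy: after each training round, the recorded per-epoch predictions on the unlabeled pool are scored, the selected samples are labeled and moved to the training set, and the model is retrained.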
