Active Learning with Neural Networks: Insights from Nonparametric Statistics

10/15/2022 ∙ by Yinglun Zhu, et al.
Deep neural networks have great representation power, but typically require large numbers of training examples. This motivates deep active learning methods that can significantly reduce the amount of labeled training data. Empirical successes of deep active learning have recently been reported in the literature; however, rigorous label complexity guarantees for deep active learning have remained elusive, constituting a significant gap between theory and practice. This paper tackles the gap by providing the first near-optimal label complexity guarantees for deep active learning. The key insight is to study deep active learning from the nonparametric classification perspective. Under standard low noise conditions, we show that active learning with neural networks can provably achieve the minimax label complexity, up to disagreement coefficient and other logarithmic terms. When equipped with an abstention option, we further develop an efficient deep active learning algorithm that achieves polylog(1/ε) label complexity, without any low noise assumptions. We also extend our results beyond the commonly studied Sobolev/Hölder spaces and develop label complexity guarantees for learning in Radon BV² spaces, which have recently been proposed as natural function spaces associated with neural networks.
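For context, the "standard low noise conditions" referenced above typically mean the Tsybakov margin (low noise) condition from nonparametric classification. A common formulation (the exponent α, constant C, and regression function η below are standard notation in this literature, not taken from this abstract) is:

\[
\mathbb{P}\bigl( 0 < \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr) \;\le\; C\, t^{\alpha} \quad \text{for all } t > 0,
\]

where η(x) = P(Y = 1 | X = x) is the regression function. Larger α means the probability mass near the decision boundary η(x) = 1/2 decays faster, which is what allows active learning to concentrate its label queries in a shrinking uncertain region and achieve improved label complexity.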
