Active Learning with Siamese Twins for Sequence Tagging

11/01/2019
by Rishi Hazra, et al.

Deep learning in general, and natural language processing methods in particular, rely heavily on annotated samples to achieve good performance. However, manually annotating data is expensive and time-consuming. Active Learning (AL) strategies reduce the need for huge volumes of labelled data by iteratively selecting a small number of examples for manual annotation based on their estimated utility in training the given model. In this paper, we argue that because AL strategies score examples independently, they may select several similar examples, not all of which aid the learning process. We propose a method, referred to as Active^2 Learning (A^2L), that actively adapts to the sequence tagging model being trained in order to eliminate such redundant examples chosen by an AL strategy. We empirically demonstrate that A^2L improves the performance of state-of-the-art AL strategies on different sequence tagging tasks. Furthermore, we show that A^2L is widely applicable by using it in conjunction with different AL strategies and sequence tagging models. We demonstrate that the proposed A^2L is able to reach the full-data F-score with ≈2-16% less data compared to state-of-the-art AL strategies on different sequence tagging datasets.
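To make the redundancy-elimination idea concrete, the sketch below greedily filters an AL-ranked batch by pairwise similarity, keeping a candidate only if it is sufficiently different from everything already kept. This is an illustrative sketch, not the paper's actual A^2L algorithm: the `encode` function stands in for the Siamese network that scores example similarity, and the cosine measure, the `threshold` value, and all names are assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def filter_redundant(candidates, encode, threshold=0.9):
    """Greedily drop candidates that are too similar to an already-kept one.

    candidates: examples in the order the AL strategy ranked them.
    encode:     stand-in for the Siamese encoder that adapts to the tagger;
                maps an example to a fixed-size vector.
    threshold:  similarity above which a candidate counts as redundant.
    """
    kept, kept_vecs = [], []
    for x in candidates:
        v = encode(x)
        # Keep x only if it is dissimilar to every example kept so far
        # (an empty kept set trivially passes, so the top candidate stays).
        if all(cosine(v, u) < threshold for u in kept_vecs):
            kept.append(x)
            kept_vecs.append(v)
    return kept

# Toy usage: random vectors stand in for learned sentence embeddings.
rng = np.random.default_rng(0)
emb = {f"sent_{i}": rng.normal(size=16) for i in range(10)}
batch = filter_redundant(sorted(emb), encode=emb.__getitem__, threshold=0.8)
print(batch)
```

Processing candidates in the AL strategy's ranking order means the highest-utility example is always retained and near-duplicates of it are the ones discarded.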
