DATE: Detecting Anomalies in Text via Self-Supervision of Transformers

04/12/2021
by Andrei Manolache, et al.

Leveraging deep learning models for Anomaly Detection (AD) has seen widespread use in recent years due to superior performance over traditional methods. Recent deep methods for anomaly detection in images learn better features of normality in an end-to-end self-supervised setting. These methods train a model to discriminate between different transformations applied to visual data and then use the output to compute an anomaly score. We use this approach for AD in text, by introducing a novel pretext task on text sequences. We learn our DATE model end-to-end, enforcing two independent and complementary self-supervision signals, one at the token-level and one at the sequence-level. Under this new task formulation, we show strong quantitative and qualitative results on the 20Newsgroups and AG News datasets. In the semi-supervised setting, we outperform state-of-the-art results by +13.5%. In the unsupervised configuration, DATE surpasses all other methods even when 10% of its training data is contaminated with anomalies (compared with 0% for the others).
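To illustrate the transformation-discrimination idea behind this family of methods, here is a minimal sketch (not the authors' implementation; the function and inputs are hypothetical): a classifier is trained on normal data to predict which transformation was applied to an input, and the anomaly score is the average negative log-likelihood the model assigns to the true transformation. Normal inputs yield confident, correct predictions and thus low scores; anomalous inputs yield higher scores.

```python
import math

def anomaly_score(predicted_probs, true_labels):
    """Average negative log-likelihood of the correct transformation.

    predicted_probs: one probability vector (over transformations) per
                     transformed copy of the input text, as produced by
                     a classifier trained on normal data.
    true_labels: index of the transformation actually applied to each copy.
    """
    nll = 0.0
    for probs, label in zip(predicted_probs, true_labels):
        # Small epsilon guards against log(0) for zero-probability outputs.
        nll += -math.log(probs[label] + 1e-12)
    return nll / len(true_labels)

# Toy example: confident (normal-looking) vs. uncertain (anomalous) input.
normal = anomaly_score([[0.9, 0.05, 0.05], [0.05, 0.9, 0.05]], [0, 1])
anomalous = anomaly_score([[0.4, 0.3, 0.3], [0.3, 0.4, 0.3]], [0, 1])
```

The design choice here mirrors the paper's premise: a model that has only ever seen normal text becomes a worse transformation classifier on out-of-distribution inputs, so its discrimination confidence doubles as an anomaly signal.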
