Unsupervised Learning of Sentence Representations Using Sequence Consistency

08/10/2018
by   Siddhartha Brahma, et al.

Computing universal distributed representations of sentences is a fundamental task in natural language processing. We propose a simple yet surprisingly powerful unsupervised method to learn such representations by enforcing consistency constraints on sequences of tokens. We consider two classes of such constraints: sequences of tokens that form a sentence, and pairs of sequences that form a sentence when merged. We learn a sentence encoder by training it to distinguish between consistent and inconsistent examples. Extensive evaluation on several transfer learning and linguistic probing tasks shows improved performance over strong baselines, substantially surpassing them in several cases.
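To make the training signal concrete, here is a minimal sketch of the consistency-discrimination idea, not the authors' implementation: positives are real sentences, negatives are produced by corrupting a single sentence or by merging two sentences, and a small encoder is trained to tell them apart. The LSTM encoder, the specific corruption operations, and all names below are illustrative assumptions.

```python
# Illustrative sketch only; architecture and corruption choices are assumptions,
# not the paper's exact setup.
import random
import torch
import torch.nn as nn

def corrupt(tokens):
    # Build an inconsistent sequence by shuffling a random span of tokens.
    t = tokens[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    span = t[i:j + 1]
    random.shuffle(span)
    t[i:j + 1] = span
    return t

def merge(a, b):
    # Build an inconsistent sequence by randomly interleaving two sentences.
    a, b, out = a[:], b[:], []
    while a or b:
        src = a if (a and (not b or random.random() < 0.5)) else b
        out.append(src.pop(0))
    return out

class Encoder(nn.Module):
    # Toy sentence encoder: embedding -> LSTM -> linear consistency score.
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.clf = nn.Linear(dim, 1)

    def forward(self, ids):
        _, (h, _) = self.rnn(self.emb(ids))
        return self.clf(h[-1])  # h[-1] doubles as the sentence representation

# Toy training loop: label 1 for real sentences, 0 for inconsistent ones.
vocab = {w: i for i, w in enumerate("the cat sat on a mat dogs bark".split())}
sent = "the cat sat on a mat".split()
sent2 = "dogs bark".split()
enc = Encoder(len(vocab))
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(100):
    examples = [(sent, 1.0), (corrupt(sent), 0.0), (merge(sent, sent2), 0.0)]
    for toks, label in examples:
        ids = torch.tensor([[vocab[w] for w in toks]])
        loss = loss_fn(enc(ids).squeeze(), torch.tensor(label))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

After training on such binary discrimination, the encoder's final hidden state (h[-1] above) would serve as the sentence representation passed to downstream transfer and probing tasks.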
