Transformers for Headline Selection for Russian News Clusters

06/19/2021
by Pavel Voropaev, et al.

In this paper, we explore various multilingual and Russian pre-trained transformer-based models for the Dialogue Evaluation 2021 shared task on headline selection. Our experiments show that a combined approach is superior to the individual multilingual and monolingual models. We present an analysis of a number of ways to obtain sentence embeddings and to learn a ranking model on top of them. We achieve a score of 87.28 on the private test set.
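As a rough illustration of the general pipeline described above (not the authors' exact setup), the sketch below obtains sentence embeddings from a pre-trained multilingual transformer via masked mean pooling and fits a simple classifier as a ranking model over headline candidates. The encoder name, the logistic-regression ranker, and all data are illustrative assumptions.

```python
# Minimal sketch, assuming a Hugging Face encoder and a scikit-learn ranker;
# the paper's actual models and ranking method may differ.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder multilingual encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(sentences):
    """Mean-pool token embeddings (with attention masking) into one vector per sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=64, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)            # (B, H)

# Toy training data: headlines labelled 1 if selected for the cluster, 0 otherwise.
train_headlines = ["Пример заголовка один", "Example headline two"]
train_labels = [1, 0]
ranker = LogisticRegression(max_iter=1000)
ranker.fit(embed(train_headlines).numpy(), train_labels)

# Rank candidate headlines within one news cluster by predicted probability.
cluster = ["Кандидат A", "Кандидат B", "Кандидат C"]
scores = ranker.predict_proba(embed(cluster).numpy())[:, 1]
print(cluster[scores.argmax()])
```

Scoring candidates within a cluster and picking the highest-scoring headline is one straightforward way to turn such embeddings plus a learned scorer into a selection decision.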
