RuCLIP – new models and experiments: a technical report

02/22/2022
by Alex Shonenkov, et al.

In this report we present six new implementations of the ruCLIP model trained on our dataset of 240M image–text pairs. Their accuracy is compared with the original CLIP model combined with Ru-En translation (OPUS-MT) on 16 datasets from different domains. Our best implementations outperform the CLIP + OPUS-MT solution on most of the datasets in both few-shot and zero-shot tasks. We briefly describe the implementations and concentrate on the conducted experiments. An inference execution time comparison is also presented.
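The zero-shot evaluation described above follows the standard CLIP recipe: embed the image and a text prompt for each candidate class, then pick the class whose text embedding is closest to the image embedding by cosine similarity. The ruCLIP API itself is not shown in the report, so the following is a minimal, library-agnostic sketch of that scoring step using toy embedding vectors (all names and values here are illustrative, not the authors' code):

```python
import numpy as np

def zero_shot_classify(image_emb: np.ndarray, text_embs: np.ndarray) -> int:
    """CLIP-style zero-shot classification.

    image_emb: (d,) embedding of one image.
    text_embs: (num_classes, d) embeddings of one text prompt per class.
    Returns the index of the class with highest cosine similarity.
    """
    # L2-normalize so the dot product equals cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = text_embs @ image_emb  # (num_classes,) similarity scores
    return int(np.argmax(sims))

# Toy example: the image embedding points mostly along class 1's direction
image = np.array([0.1, 0.9, 0.0])
classes = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
print(zero_shot_classify(image, classes))  # -> 1
```

In the actual pipeline, `image_emb` and `text_embs` would come from ruCLIP's image and text encoders (or, for the baseline, from CLIP applied to OPUS-MT translations of the Russian prompts); only the similarity-based decision rule is shown here.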


