Large-scale multi-modal contrastive pre-training has demonstrated great
...
The goal of this work is to build flexible video-language models that ca...
Vision-language (V+L) pretraining models have achieved great success in
...
Commonsense reasoning (CSR) requires the model to be equipped with gener...
It is often observed in knowledge-centric tasks (e.g., common sense ques...
Commonsense reasoning requires a model to make presumptions about world
...
Cross-lingual Summarization (CLS) aims at producing a summary in the tar...
Given the complexity of combinations of tasks, languages, and domains in...
With the abundance of automatic meeting transcripts, meeting summarizati...
A commonly observed problem with abstractive summarization is the distor...
Learning multilingual representations of text has proven a successful me...
Formality style transformation is the task of modifying the formality of...
This paper describes the ARIEL-CMU submissions to the Low Resource Human...
Cross-lingual transfer of word embeddings aims to establish the semantic...
Cross-lingual text classification (CLTC) is the task of classifying docum...