Due to the unbalanced training data distribution, the language ability o...
Vision-language models (VLMs) have shown impressive performance in subst...
Automatic metrics play a crucial role in machine translation. Despite th...
Neural machine translation has achieved promising results on many transl...
Large-scale Pretrained Language Models (LLMs), such as ChatGPT and GPT4,...
Large language models (LLMs) have demonstrated remarkable potential in h...
Benefiting from the sequence-level knowledge distillation, the Non-Autor...
Augmenting the base neural model with a token-level symbolic datastore i...
Nearest Neighbor Machine Translation (kNN-MT) is a simple and effective m...
Abstractive summarization is the process of generating a summary given a...
Recently, non-autoregressive (NAR) neural machine translation models hav...
kNN-MT presents a new paradigm for domain adaptation by building an exte...
As one of the challenging NLP tasks, designing math word problem (MWP) s...
In recent years, vision and language pre-training (VLP) models have adva...
Domain adaptation is an important challenge for neural machine translati...
The numerical reasoning in the financial domain – performing quantitativ...
Complaining is a speech act that expresses a negative inconsistency betw...
Recently, parallel text generation has received widespread attention due...
We study the problem of online learning with human feedback in the human...
How to effectively adapt neural machine translation (NMT) models accordi...
Recently, kNN-MT has shown the promising capability of directly incorpor...
Unknown intent detection aims to identify the out-of-distribution (OOD) ...
kNN-MT, recently proposed by Khandelwal et al. (2020a), successfully com...
Machine Translation Quality Estimation (QE) is a task of predicting the ...
Sequence-to-sequence (seq2seq) problems such as machine translation are ...
Non-autoregressive Transformer is a promising text generation model. How...
Social recommendation is effective in improving the recommendation perfo...
Previous domain adaptation research usually neglects the diversity in tra...
Unsupervised Bilingual Dictionary Induction methods based on the initial...
Aspect-based sentiment analysis (ABSA) aims at analyzing the sentiment o...
Aspect-level sentiment classification (ALSC) and aspect oriented opinion...
Discourse context has been proven useful when translating documents. It ...
Recent studies show that the attention heads in Transformer are not equa...
Cross-prompt automated essay scoring (AES) requires the system to use no...
It is well-understood that different algorithms, training processes, and...
Transformer, based on the encoder-decoder framework, has achieved state-...
Document-level machine translation manages to outperform sentence level ...
Target-oriented opinion words extraction (TOWE) is a new subtask of ABSA...
Pre-training and fine-tuning have achieved great success in the natural ...
Non-autoregressive models are promising on various text generation tasks...
The Transformer model has been widely used on machine translation tasks and ...
Neural machine translation systems tend to fail on less decent inputs d...
Natural Language Inference (NLI) aims to determine the logic relationshi...
In sequence labeling, previous domain adaptation methods focus on the ad...
Monolingual data has been demonstrated to be helpful in improving the tr...
Relation detection is a core step in many natural language processing appli...
State-of-the-art machine translation models are still not on par with hu...
Variational auto-encoders (VAEs) are widely used in natural language gen...
Previous studies have shown that neural machine translation (NMT) models...
Previous studies show that incorporating external information could impr...