In Grammatical Error Correction (GEC), it is crucial to ensure the user'...
Discriminatory social biases, including gender biases, have been found i...
Pre-trained language models trained on large-scale data have learned ser...
Large Language Models (LLMs) have achieved human-level fluency in text g...
In scholarly documents, figures provide a straightforward way of communi...
Large-scale pre-trained language models such as GPT-3 have shown remarka...
Large Language Models (LLMs) have demonstrated remarkable performance in...
Humans work together to solve common problems by having discussions, exp...
A promising approach for knowledge-based Word Sense Disambiguation (WSD)...
Document-level relation extraction (DocRE) is the task of identifying al...
Numerous types of social biases have been identified in pre-trained lang...
IR models using a pretrained language model significantly outperform lex...
We study the relationship between task-agnostic intrinsic and task-speci...
Non-autoregressive (NAR) models can generate sentences with less computa...
Impressive performance of Transformer has been attributed to self-attent...
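For readers unfamiliar with the mechanism this item refers to, the following is a minimal sketch of scaled dot-product self-attention in NumPy; the matrix names, dimensions, and random inputs are illustrative assumptions and do not reproduce any specific paper's setup.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) similarity logits
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # context-mixed token representations

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dim embeddings (toy values)
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```

Each output row is a weighted mixture of the value vectors, with weights given by the softmax-normalized query-key similarities.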
Logical table-to-text generation is a task that involves generating logi...
Combining multiple source embeddings to create meta-embeddings is consid...
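For context, the sketch below shows two common meta-embedding baselines, concatenation and averaging of source embeddings; the toy source vectors and the padding-to-a-common-dimension step are assumptions for illustration, not the specific method proposed in the paper.

```python
# Two simple meta-embedding baselines: concatenation and averaging (illustrative).
import numpy as np

def concat_meta(embs):
    """embs: list of vectors for the same word from different source spaces."""
    return np.concatenate(embs)

def avg_meta(embs, dim=100):
    """Average after zero-padding/truncating every source vector to `dim`."""
    fixed = [np.pad(e, (0, max(0, dim - len(e))))[:dim] for e in embs]
    return np.mean(fixed, axis=0)

source_a = np.random.rand(50)    # hypothetical source embedding 1
source_b = np.random.rand(100)   # hypothetical source embedding 2
print(concat_meta([source_a, source_b]).shape)  # (150,)
print(avg_meta([source_a, source_b]).shape)     # (100,)
```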
Formality style transfer (FST) is a task that involves paraphrasing an i...
Subword regularization methods use multiple subword segmentations during traini...
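As an illustration of this idea, the snippet below uses SentencePiece's sampling mode to obtain a different segmentation of the same sentence on each call, which is how segmentations are typically varied during training; the model path `spm.model` is a placeholder and the hyperparameters are illustrative assumptions.

```python
# Sampling multiple subword segmentations with SentencePiece (illustrative sketch).
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spm.model")  # placeholder unigram model
text = "Subword regularization samples a segmentation per epoch."
for _ in range(3):
    # enable_sampling draws one segmentation from the unigram lattice;
    # alpha controls smoothing, nbest_size=-1 samples over all candidates.
    print(sp.encode(text, out_type=str, enable_sampling=True, alpha=0.1, nbest_size=-1))
```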
Grammatical Error Correction (GEC) should not focus only on high accurac...
South and North Korea both use the Korean language. However, Korean NLP ...
Neural models trained with large amounts of parallel data have achieved i...
Logical Natural Language Generation, i.e., generating textual descriptio...
This paper explores a variant of automatic headline generation methods, ...
Since traditional tokenizers are isolated from a downstream task and mod...
Large-scale pretraining and task-specific fine-tuning is now the standar...
In this study, a novel method for extracting named entities and relation...
We present a multi-task learning framework for cross-lingual abstractive...
Video-guided machine translation, as one of multimodal neural machine tra...
The performance of neural machine translation systems is commonly evalua...
Most studies on abstractive summarization report ROUGE scores between s...
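As a concrete instance of the evaluation practice mentioned here, the sketch below computes ROUGE between a system summary and a reference using the `rouge-score` package; the package choice and the toy sentences are assumptions made only for illustration.

```python
# Computing ROUGE between a system summary and a reference (illustrative sketch).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the cat sat on the mat"
system = "the cat lay on the mat"
scores = scorer.score(reference, system)   # {metric: Score(precision, recall, fmeasure)}
print(scores["rougeL"].fmeasure)
```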
Most neural machine translation (NMT) models operate on source and targe...
We propose a data-to-text generation model with two modules, one for tra...
Neural encoder-decoder models have been successful in natural language g...
Browsing news articles on multiple devices is now possible. The lengths ...
The encoder-decoder model is widely used in natural language generation ...
In this paper, we compose a new task for deep argumentative structure an...
This study addresses the problem of identifying the meaning of unknown w...
Learning distributed representations for relation instances is a central...
We present in this paper our approach for modeling inter-topic preferenc...
This paper connects a vector-based composition model to a formal semanti...
Additive composition (Foltz et al., 1998; Landauer and Dumais, 1997; Mitc...
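A minimal worked example of additive composition, assuming two toy word vectors: the phrase representation is simply the sum (optionally the average) of its constituent word vectors.

```python
# Additive composition of word vectors into a phrase vector (toy illustration).
import numpy as np

word_vecs = {
    "heavy": np.array([0.8, 0.1, 0.3]),
    "rain":  np.array([0.2, 0.9, 0.4]),
}

def additive_composition(words, average=False):
    vecs = [word_vecs[w] for w in words]
    v = np.sum(vecs, axis=0)
    return v / len(vecs) if average else v

print(additive_composition(["heavy", "rain"]))   # [1.0 1.0 0.7]
```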