Neural abstractive summarization models are susceptible to generating fa...
Existing abstractive summarization models lack explicit control mechanis...
Novel neural architectures, training strategies, and the availability of...
Influence functions approximate the 'influences' of training data-points...
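For context, a standard formulation of this idea (following Koh & Liang, 2017) estimates how the loss on a test point would change if a training point z were upweighted; the loss L, fitted parameters \hat\theta, and empirical Hessian H below are the usual symbols from that formulation, not definitions taken from the snippet itself:

\mathcal{I}(z, z_{\text{test}}) = -\,\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top} H_{\hat\theta}^{-1}\, \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat\theta).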
For protein sequence datasets, unlabeled data has greatly outpaced label...
Interpretability techniques in NLP have mainly focused on understanding ...
Human creativity is often described as the mental process of combining a...
To assist the human review process, we build a novel ReviewRobot to automati...
A standard way to address different NLP problems is by first constructin...
Class-conditional language models (CC-LMs) can be used to generate natur...
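As a minimal sketch of what a class-conditional LM is (the standard autoregressive factorization; the symbols c, x, and \theta below are illustrative assumptions, not drawn from the snippet): the model conditions each next-token probability on a control class c, and Bayes' rule lets the same model score how well a sequence matches c:

p_\theta(x_{1:T} \mid c) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}, c),
\qquad
p(c \mid x_{1:T}) \propto p(c)\, p_\theta(x_{1:T} \mid c).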
We introduce DART, a large dataset for open-domain structured data recor...
Transformer architectures have proven to learn useful representations fo...
Word embeddings derived from human-generated corpora inherit strong gend...
Neural networks lack the ability to reason about qualitative physics and...
State-of-the-art models in NLP are now predominantly based on deep neura...
Deep learning models perform poorly on tasks that require commonsense re...
Ensembling methods are well known for improving prediction accuracy. How...
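A minimal sketch of the basic idea, assuming a scikit-learn-style predict_proba interface (the model list and that interface are illustrative assumptions, not part of the abstract above): averaging the class probabilities of several independently trained models and taking the argmax typically reduces prediction variance.

import numpy as np

def ensemble_predict(models, X):
    # Average each model's per-class probabilities (hypothetical
    # `predict_proba` interface), then pick the highest-probability class.
    avg_probs = np.mean([m.predict_proba(X) for m in models], axis=0)
    return avg_probs.argmax(axis=-1)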
We present results on combining supervised and unsupervised methods to e...