Unstructured Electronic Health Record (EHR) data often contains critical...
Instruction fine-tuning has recently emerged as a promising approach for...
An abundance of datasets exists for training and evaluating models on the...
Automated text simplification aims to produce simple versions of complex...
Medical systematic reviews are crucial for informing clinical decision m...
Large language models, particularly GPT-3, are able to produce high qual...
Relation extraction (RE) is the core NLP task of inferring semantic rela...
Results from Randomized Controlled Trials (RCTs) establish the comparati...
We present TrialsSummarizer, a system that aims to automatically summari...
Large Language Models (LLMs) have yielded fast and dramatic progress in ...
Accessing medical literature is difficult for laypeople as the content i...
We consider the problem of identifying a minimal subset of training data...
Multi-document summarization entails producing concise synopses of colle...
Interpretable entity representations (IERs) are sparse embeddings that a...
Many language tasks (e.g., Named Entity Recognition, Part-of-Speech tagg...
The primary goal of drug safety researchers and regulators is to promptl...
We provide a quantitative and qualitative analysis of self-repetition in...
Existing question answering (QA) datasets derived from electronic health...
Automated simplification models aim to make input texts more readable. S...
Medical question answering (QA) systems have the potential to answer cli...
Training the large deep neural networks that dominate NLP requires large...
Pre-trained language models induce dense entity representations that off...
Large Transformers pretrained over clinical notes from Electronic Health...
Representations from large pretrained models such as BERT encode a range...
Recent work has shown that fine-tuning large networks is surprisingly se...
We consider the problem of learning to simplify medical texts. This is i...
Widespread adoption of deep models has motivated a pressing need for app...
Unsupervised Data Augmentation (UDA) is a semi-supervised technique that...
The best evidence concerning comparative treatment effectiveness comes f...
We consider the problem of automatically generating a narrative biomedic...
We introduce Trialstreamer, a living database of clinical trial reports....
In this work, we consider the exponentially growing subarea of genetics ...
Modern deep learning models for NLP are notoriously opaque. This has mot...
How do we most effectively treat a disease or condition? Ideally, we cou...
In many settings it is important for one to be able to understand why a ...
Electronic Health Records (EHRs) provide vital contextual information to...
Named Entity Recognition systems achieve remarkable performance on domai...
Named entity recognition systems perform well on standard datasets compr...
State-of-the-art models in NLP are now predominantly based on deep neura...
Hypertension is a major risk factor for stroke, cardiovascular disease, ...
Modern NLP systems require high-quality annotated data. In specialized d...
The shift to electronic medical records (EMRs) has engendered research i...
How do we know if a particular medical treatment actually works? Ideally...
Attention mechanisms have seen wide adoption in neural NLP models. In ad...
We present Variational Aspect-based Latent Topic Allocation (VALTA), a f...
We present Variational Aspect-Based Latent Dirichlet Allocation (VALDA),...
We propose a model for tagging unstructured texts with an arbitrary numb...
Active learning is a widely-used training strategy for maximizing predic...
We present a corpus of 5,000 richly annotated abstracts of medical artic...
We propose a method for learning disentangled sets of vector representat...