Generating Continuous Representations of Medical Texts

05/15/2018
by Graham Spinks, et al.

We present an architecture that generates medical texts while learning an informative, continuous representation with discriminative features. During training, the input to the system is a dataset of captions for medical X-rays. The acquired continuous representations are of particular interest for use in many machine learning techniques where the discrete and high-dimensional nature of textual input is an obstacle. We use an Adversarially Regularized Autoencoder to create realistic text in both unconditional and conditional settings. We show that this technique is applicable to medical texts, which often contain syntactic and domain-specific shorthand. A quantitative evaluation shows that we achieve a lower model perplexity than a traditional LSTM generator.
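As a rough illustration of the setup the abstract describes, the sketch below pairs an LSTM autoencoder with a generator/critic operating on its continuous code space, in the spirit of the Adversarially Regularized Autoencoder. It assumes PyTorch; all module names, dimensions, and the simplified WGAN-style losses are illustrative assumptions rather than the authors' implementation, and details such as weight clipping and the conditional variant are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, CODE, NOISE = 5000, 128, 128, 64   # assumed sizes, not from the paper

class Encoder(nn.Module):
    """Maps discrete caption tokens to a single continuous code vector."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, CODE, batch_first=True)
    def forward(self, tokens):                   # tokens: (batch, seq)
        _, (h, _) = self.lstm(self.embed(tokens))
        return h[-1]                             # code: (batch, CODE)

class Decoder(nn.Module):
    """Reconstructs the caption from the code with teacher forcing."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB + CODE, CODE, batch_first=True)
        self.out = nn.Linear(CODE, VOCAB)
    def forward(self, code, tokens):
        cond = code.unsqueeze(1).expand(-1, tokens.size(1), -1)
        h, _ = self.lstm(torch.cat([self.embed(tokens), cond], dim=-1))
        return self.out(h)                       # logits: (batch, seq, VOCAB)

def mlp(d_in, d_out):                            # bodies of generator and critic
    return nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_out))

enc, dec = Encoder(), Decoder()
gen, critic = mlp(NOISE, CODE), mlp(CODE, 1)     # gen: noise -> code space

def train_step(tokens, opt_ae, opt_gen, opt_critic):
    """One simplified ARAE update on a batch of caption token ids."""
    # 1) Autoencoder: reconstruct each caption from its continuous code.
    code = enc(tokens)
    logits = dec(code, tokens[:, :-1])
    recon = F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
    opt_ae.zero_grad(); recon.backward(); opt_ae.step()

    # 2) Critic: separate real codes from generated ones (both detached).
    noise = torch.randn(tokens.size(0), NOISE)
    d_loss = critic(gen(noise).detach()).mean() - critic(code.detach()).mean()
    opt_critic.zero_grad(); d_loss.backward(); opt_critic.step()

    # 3) Generator: push generated codes toward the critic's "real" region,
    #    so noise samples decode into realistic captions via `dec`.
    g_loss = -critic(gen(noise)).mean()
    opt_gen.zero_grad(); g_loss.backward(); opt_gen.step()
    return recon.item()

In use, `opt_ae` would cover both encoder and decoder parameters, e.g. torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3), with separate optimizers for `gen` and `critic`. The evaluation the abstract mentions is the standard one: perplexity is the exponential of the average per-token negative log-likelihood, exp(-(1/N) sum_i log p(w_i | w_<i)), so lower values mean the model assigns higher probability to held-out captions.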


