Learning and Evaluating Sparse Interpretable Sentence Embeddings

09/23/2018
by Valentin Trifonov, et al.

Previous research on word embeddings has shown that sparse representations, which can either be learned on top of existing dense embeddings or obtained through model constraints during training, offer greater interpretability: to some degree, each dimension can be understood by a human and associated with a recognizable feature in the data. In this paper, we transfer this idea to sentence embeddings and explore several approaches to obtaining a sparse representation. We further introduce a novel, quantitative, and automated evaluation metric for sentence embedding interpretability based on topic coherence methods. We observe an increase in interpretability compared to dense models on a dataset of movie dialogs and on the scene descriptions from the MS COCO dataset.
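The abstract does not detail the methods, but a minimal sketch can illustrate both ideas: sparsifying dense sentence embeddings post hoc, and scoring a single dimension with a topic-coherence-style measure over its top-activating sentences. Everything below is an assumption for illustration, not the paper's actual approach: the k-sparse projection stands in for the unspecified sparsification methods, and a simple Jaccard word-overlap score stands in for the coherence metric.

```python
import numpy as np

def k_sparse(embeddings, k):
    # Keep the k largest-magnitude entries in each row and zero the rest:
    # one simple post-hoc way to sparsify dense embeddings (illustrative,
    # not necessarily what the paper uses).
    sparse = np.zeros_like(embeddings)
    idx = np.argsort(np.abs(embeddings), axis=1)[:, -k:]
    rows = np.arange(embeddings.shape[0])[:, None]
    sparse[rows, idx] = embeddings[rows, idx]
    return sparse

def dimension_coherence(sparse_emb, sentences, dim, top_n=10):
    # Score one dimension by the average pairwise word overlap (Jaccard)
    # of the sentences that activate it most strongly; an interpretable
    # dimension should pick out sentences that share vocabulary.
    top = np.argsort(sparse_emb[:, dim])[-top_n:]
    bags = [set(sentences[i].lower().split()) for i in top]
    pairs = [(a, b) for i, a in enumerate(bags) for b in bags[i + 1:]]
    scores = [len(a & b) / len(a | b) for a, b in pairs if a | b]
    return float(np.mean(scores)) if scores else 0.0

# Toy usage: 1000 random "sentence embeddings", sparsified to 5 active dims.
rng = np.random.default_rng(0)
dense = rng.normal(size=(1000, 64))
sparse = k_sparse(dense, k=5)
sentences = ["sentence %d placeholder text" % i for i in range(1000)]
print(dimension_coherence(sparse, sentences, dim=0))
```

A full evaluation in this spirit would average such a score over all dimensions and substitute a stronger coherence measure, such as NPMI computed over a reference corpus, mirroring how topic coherence is measured for topic models.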


Related research

05/11/2020: Evaluating Sparse Interpretable Word Embeddings for Biomedical Domain
Word embeddings have found their way into a wide range of natural langua...

11/24/2016: Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images
In this paper, we focus on training and evaluating effective word embedd...

11/23/2017: SPINE: SParse Interpretable Neural Embeddings
Prediction without justification has limited utility. Much of the succes...

03/03/2016: MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification
We introduce a novel, simple convolution neural network (CNN) architectu...

11/11/2017: Interpretable probabilistic embeddings: bridging the gap between topic models and neural networks
We consider probabilistic topic models and more recent word embedding te...

06/25/2020: Background Knowledge Injection for Interpretable Sequence Classification
Sequence classification is the supervised learning task of building mode...

12/10/2017: Inducing Interpretability in Knowledge Graph Embeddings
We study the problem of inducing interpretability in KG embeddings. Spec...
