Learning Disentangled Representations with Semi-Supervised Deep Generative Models

06/01/2017
by N. Siddharth, et al.

Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic encoder and decoder network. Typically these models encode all features of the data into a single variable. Here we are interested in learning disentangled representations that encode distinct aspects of the data into separate variables. We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder. This allows us to train partially-specified models that make relatively strong assumptions about a subset of interpretable variables and rely on the flexibility of neural networks to learn representations for the remaining variables. We further define a general objective for semi-supervised learning in this model class, which can be approximated using an importance sampling procedure. We evaluate our framework's ability to learn disentangled representations, both through qualitative exploration of its generative capacity and through quantitative evaluation of its discriminative ability, on a variety of models and datasets.
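The objective the abstract generalises is the standard VAE evidence lower bound (ELBO), estimated by Monte Carlo with the reparameterisation trick. The following is a minimal NumPy sketch of that baseline bound for a single Gaussian latent variable; it is illustrative only, not the paper's partially-specified architecture or its importance-sampling semi-supervised objective, and all weights, shapes, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoder" and "decoder" (hypothetical weights, for illustration).
D, Zdim = 4, 2
W_enc = rng.normal(scale=0.1, size=(D, 2 * Zdim))  # x -> (mu, log_var) of q(z|x)
W_dec = rng.normal(scale=0.1, size=(Zdim, D))      # z -> reconstruction mean

def encode(x):
    """Amortised Gaussian posterior parameters q(z|x) = N(mu, diag(exp(log_var)))."""
    h = x @ W_enc
    return h[:Zdim], h[Zdim:]

def elbo(x, n_samples=8):
    """Monte Carlo ELBO: E_q[log p(x|z)] - KL(q(z|x) || N(0, I))."""
    mu, log_var = encode(x)
    std = np.exp(0.5 * log_var)
    total = 0.0
    for _ in range(n_samples):
        z = mu + std * rng.normal(size=Zdim)      # reparameterisation trick
        x_hat = z @ W_dec
        total += -0.5 * np.sum((x - x_hat) ** 2)  # Gaussian log-lik, up to a constant
    recon = total / n_samples
    # Closed-form KL between diagonal Gaussian q(z|x) and the N(0, I) prior.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon - kl

x = rng.normal(size=D)
print(float(elbo(x)))
```

In the paper's setting, some latent variables carry structured, interpretable semantics and are intermittently observed (the semi-supervised case), which replaces this single-variable bound with a graphical-model objective approximated by importance sampling.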


Related research:

- 04/17/2018: DGPose: Disentangled Semi-supervised Deep Generative Models for Human Body Analysis. Deep generative modelling for robust human body analysis is an emerging ...
- 09/15/2017: Disentangled Variational Auto-Encoder for Semi-supervised Learning. In this paper, we develop a novel approach for semi-supervised VAE witho...
- 03/07/2018: Inferencing Based on Unsupervised Learning of Disentangled Representations. Combining Generative Adversarial Networks (GANs) with encoders that lear...
- 11/22/2016: Inducing Interpretable Representations with Variational Autoencoders. We develop a framework for incorporating structured graphical models in ...
- 10/12/2016: Deep disentangled representations for volumetric reconstruction. We introduce a convolutional neural network for inferring a compact dise...
- 04/14/2021: Disentangling Representations of Text by Masking Transformers. Representations from large pretrained models such as BERT encode a range...
- 02/18/2021: VAE Approximation Error: ELBO and Conditional Independence. The importance of Variational Autoencoders reaches far beyond standalone...
