Learning Hierarchical Features from Generative Models

by Shengjia Zhao, et al.

Deep neural networks have proven very successful at learning feature hierarchies in supervised learning tasks. Generative models, on the other hand, have benefited less from hierarchical models with multiple layers of latent variables. In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with existing variational methods, and we characterize limitations on the kinds of features existing models can learn. Finally, we propose an alternative architecture that does not suffer from these limitations. Our model is able to learn highly interpretable and disentangled hierarchical features on several natural image datasets with no task-specific regularization or prior knowledge.
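For context, the "existing variational methods" the abstract refers to maximize an evidence lower bound (ELBO). For a two-layer hierarchical latent variable model factorized as $p(x, z_1, z_2) = p(x \mid z_1)\, p(z_1 \mid z_2)\, p(z_2)$ with a bottom-up inference network $q(z_1 \mid x)\, q(z_2 \mid z_1)$, the objective can be sketched as (generic notation, not taken from the paper itself):

$$
\log p(x) \;\ge\; \mathbb{E}_{q(z_1, z_2 \mid x)}\Big[ \log p(x \mid z_1) + \log p(z_1 \mid z_2) + \log p(z_2) - \log q(z_1 \mid x) - \log q(z_2 \mid z_1) \Big]
$$

The paper's claim concerns models trained with objectives of this form: maximizing the bound does not by itself force the upper latent layer $z_2$ to capture features more abstract than those in $z_1$.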




