LIA: Latently Invertible Autoencoder with Adversarial Learning

06/19/2019
by Jiapeng Zhu, et al.

Deep generative models play an increasingly important role in machine learning and computer vision. However, two fundamental issues hinder real-world applications of these techniques: the learning difficulty of variational inference in the Variational AutoEncoder (VAE) and the lack of an encoder for mapping samples into the latent space in the Generative Adversarial Network (GAN). In this paper, we address both issues in a single framework by proposing a novel algorithm named Latently Invertible Autoencoder (LIA). A deep invertible network and its inverse mapping are symmetrically embedded in the latent space of a VAE. The partial encoder first transforms inputs into feature vectors, and the invertible network then reshapes the distribution of these feature vectors to approach a prior. The decoder proceeds in the reverse order of the encoder's composite mappings. A two-stage, stochasticity-free training scheme is devised to train LIA via adversarial learning: we first train a standard GAN whose generator is the decoder of LIA, and then train an autoencoder in an adversarial manner with the invertible network detached from LIA. Experiments conducted on the FFHQ dataset validate the effectiveness of LIA for inference and generation tasks.
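The layout described above can be summarized in a compact sketch. Below is a minimal PyTorch rendering of the LIA structure, assuming additive (NICE-style) coupling layers for the invertible network; all module names, layer sizes, and the coupling design are illustrative assumptions rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn


class InvertibleBlock(nn.Module):
    """Additive coupling layer (NICE-style): invertible in closed form."""

    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, half), nn.ReLU(), nn.Linear(half, half)
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        # Couple one half with the other, then swap halves so that
        # stacked blocks transform every coordinate.
        return torch.cat([x2 + self.net(x1), x1], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y2, y1 - self.net(y2)], dim=1)


class LIA(nn.Module):
    def __init__(self, x_dim=784, w_dim=128, n_blocks=4):
        super().__init__()
        # Partial encoder: data x -> feature vector w.
        self.enc = nn.Sequential(nn.Linear(x_dim, 512), nn.ReLU(),
                                 nn.Linear(512, w_dim))
        # Invertible network: w -> latent z (reshaped toward the prior).
        self.flow = nn.ModuleList(
            InvertibleBlock(w_dim) for _ in range(n_blocks)
        )
        # Partial decoder: feature vector w -> reconstruction/generation x.
        self.dec = nn.Sequential(nn.Linear(w_dim, 512), nn.ReLU(),
                                 nn.Linear(512, x_dim))

    def encode(self, x):
        """Complete encoder: x -> w -> z."""
        w = self.enc(x)
        for blk in self.flow:
            w = blk(w)
        return w

    def decode(self, z):
        """Complete decoder: z -> w -> x, the encoder's mappings in reverse."""
        for blk in reversed(self.flow):
            z = blk.inverse(z)
        return self.dec(z)

    def autoencode(self, x):
        """Stage-two path: with the invertible network detached, the
        autoencoder runs on feature space w with no sampling step."""
        return self.dec(self.enc(x))


lia = LIA()
fake = lia.decode(torch.randn(8, 128))       # stage one: GAN generator role
recon = lia.autoencode(torch.rand(8, 784))   # stage two: deterministic AE
```

Under this reading, stage one trains `decode(z)` with z drawn from the prior as the generator of a standard GAN, and stage two trains the `autoencode(x)` path adversarially; because the invertible network is detached in stage two, no stochastic variational inference is required at any point.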
