Happy People – Image Synthesis as Black-Box Optimization Problem in the Discrete Latent Space of Deep Generative Models

by Steffen Jung, et al.
Max Planck Society

In recent years, optimization in the learned latent space of deep generative models has been successfully applied to black-box optimization problems such as drug design, image generation, and neural architecture search. Existing approaches leverage the ability of neural models to learn the data distribution from a limited number of samples, so that new samples can be drawn from that distribution. In this work, we propose a novel image-generation approach that optimizes the generated sample with respect to a continuously quantifiable property. While we anticipate absolutely no practically meaningful application for the proposed framework, it is theoretically principled and allows us to quickly propose samples at the mere boundary of the training data distribution. Specifically, we propose to use tree-based ensemble models as mathematical programs over the discrete latent space of vector-quantized VAEs, which can be solved to global optimality. Subsequent weighted retraining on these queries induces a distribution shift. Lacking a practically relevant problem, we consider a visually appealing application: the generation of happily smiling faces (where the training distribution contains only less happy people), and show the principled behavior of our approach in terms of improved FID and a higher smile degree compared to baseline approaches.
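The weighted-retraining step mentioned in the abstract can be illustrated with a small sketch. The idea, following the rank-based scheme of Tripp et al.'s "Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining", is to reweight the training set so that samples with a higher property score (here, smile degree) dominate the generative model's next training round, shifting the learned distribution toward them. The weight formula `1 / (k*N + rank)` and all variable names below are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch: rank-based sample weights for weighted retraining.
# Assumption: weight w_i = 1 / (k*N + rank_i), rank 0 = best score,
# in the spirit of Tripp et al. (2020); names are illustrative.

def rank_weights(scores, k=1e-3):
    """Return normalized weights; higher-scoring samples get larger weight."""
    n = len(scores)
    # rank of each sample: 0 for the highest score, n-1 for the lowest
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    ranks = [0] * n
    for r, i in enumerate(order):
        ranks[i] = r
    weights = [1.0 / (k * n + r) for r in ranks]
    total = sum(weights)
    return [w / total for w in weights]  # normalize to sum to 1

smiles = [0.1, 0.9, 0.5, 0.3]  # hypothetical smile-degree scores
w = rank_weights(smiles)
# the happiest sample (score 0.9) receives the largest weight
```

These weights would then be passed to the VQ-VAE's training loss as per-sample multipliers; a small `k` makes the weighting more aggressive, a large `k` approaches uniform weighting.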


Related papers:

- Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining
- Pulling back information geometry
- Improving black-box optimization in VAE latent space using decoder uncertainty
- Combining Latent Space and Structured Kernels for Bayesian Optimization over Combinatorial Spaces
- Latent Space Optimal Transport for Generative Models
- Learning from Small Data Through Sampling an Implicit Conditional Generative Latent Optimization Model
- Proper losses for discrete generative models
