Unsupervised K-modal Styled Content Generation

01/10/2020
by Omry Sendik, et al.

The emergence of generative models based on deep neural networks has recently enabled the automatic generation of massive amounts of graphical content, both in 2D and in 3D. Generative Adversarial Networks (GANs) and style control mechanisms, such as Adaptive Instance Normalization (AdaIN), have proved particularly effective in this context, culminating in the state-of-the-art StyleGAN architecture. While such models are able to learn diverse distributions, provided a sufficiently large training set, they are not well suited for scenarios where the distribution of the training data exhibits a multi-modal behavior. In such cases, reshaping a uniform or normal distribution over the latent space into a complex multi-modal distribution in the data domain is challenging, and the quality of the generated samples may suffer as a result. Furthermore, the different modes are entangled with the other attributes of the data, and thus mode transitions cannot be well controlled via continuous style parameters. In this paper, we introduce uMM-GAN, a novel architecture designed to better model such multi-modal distributions in an unsupervised fashion. Building upon the StyleGAN architecture, our network learns multiple modes, in a completely unsupervised manner, and combines them using a set of learned weights. Quite strikingly, we show that this approach is capable of homing in on the natural modes in the training set, and effectively approximates the complex distribution as a superposition of multiple simple ones. We demonstrate that uMM-GAN copes better with multi-modal distributions, while at the same time disentangling the modes from their style, thereby providing an independent degree of control over the generated content.
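The abstract does not spell out the implementation details, but the core idea (a generator that learns several "mode" inputs and blends them with learned weights, while style is still injected via AdaIN) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' code: the class and parameter names (KModalInput, TinyGenerator, num_modes, etc.) and the softmax blending are assumptions made for the example.

```python
# Hedged sketch of the uMM-GAN idea: K learned mode constants blended by
# learned weights, with style applied through AdaIN as in StyleGAN.
import torch
import torch.nn as nn


class KModalInput(nn.Module):
    """Learns K constant 'mode' tensors and, per sample, a set of weights
    that blends them into one generator input (a superposition of modes)."""
    def __init__(self, num_modes: int, latent_dim: int, channels: int, size: int = 4):
        super().__init__()
        # K learned constants, analogous to StyleGAN's single learned constant input.
        self.modes = nn.Parameter(torch.randn(num_modes, channels, size, size))
        # Maps a latent code to K blending weights (learned without supervision).
        self.to_weights = nn.Linear(latent_dim, num_modes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the blend a convex combination of the learned modes.
        w = torch.softmax(self.to_weights(z), dim=1)          # (B, K)
        return torch.einsum('bk,kchw->bchw', w, self.modes)   # (B, C, H, W)


class AdaIN(nn.Module):
    """Adaptive Instance Normalization: a style code sets per-channel scale/bias."""
    def __init__(self, style_dim: int, channels: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(style_dim, channels * 2)

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        scale, bias = self.affine(style).chunk(2, dim=1)
        return self.norm(x) * (1 + scale[:, :, None, None]) + bias[:, :, None, None]


class TinyGenerator(nn.Module):
    """Toy generator: blended mode input -> conv block modulated by AdaIN.
    Mode selection (the blend) and style (AdaIN parameters) come from separate
    inputs, reflecting the mode/style disentanglement described in the abstract."""
    def __init__(self, num_modes=4, latent_dim=64, style_dim=64, channels=128):
        super().__init__()
        self.input = KModalInput(num_modes, latent_dim, channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.adain = AdaIN(style_dim, channels)
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, z: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        x = self.input(z)
        x = self.adain(torch.relu(self.conv(x)), style)
        return self.to_rgb(x)


# Usage: separate codes control the mode blend and the style independently.
G = TinyGenerator()
z = torch.randn(8, 64)       # chooses how the learned modes are combined
style = torch.randn(8, 64)   # controls appearance via AdaIN
imgs = G(z, style)           # (8, 3, 4, 4) toy-resolution output
```

In a full model the blended input would feed a progressively upsampling StyleGAN-style synthesis network and be trained adversarially; the sketch only isolates the "superposition of learned modes" mechanism.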


