MIXGAN: Learning Concepts from Different Domains for Mixture Generation

07/04/2018
by Guang-Yuan Hao, et al.

In this work, we present an attempt at mixture generation: absorbing different image concepts (e.g., content and style) from different domains and thereby generating a new domain with the learned concepts. In particular, we propose a mixture generative adversarial network (MIXGAN). MIXGAN learns the concepts of content and style from two domains respectively, and can therefore join them for mixture generation in a new domain, i.e., generating images with content from one domain and style from another. MIXGAN overcomes the limitation of current GAN-based models, which either generate new images only in the same domain observed during training, or require off-the-shelf content templates for transfer or translation. Extensive experimental results demonstrate the effectiveness of MIXGAN compared to related state-of-the-art GAN-based models.
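The abstract describes the idea only at a high level, so the sketch below is a hypothetical illustration (not the authors' released implementation) of how a generator could consume a content code learned from one domain and a style code learned from another. The class names (MixtureGenerator, StyleModulatedBlock), the latent dimensions, and the AdaIN-style feature modulation are assumptions chosen for clarity; MIXGAN's actual architecture may differ.

```python
# Hypothetical sketch of a content/style mixture generator (not MIXGAN's actual code).
# Assumption: content is captured by a latent vector z_content, style by z_style,
# and style is injected via feature-wise affine modulation (an AdaIN-like pattern).
import torch
import torch.nn as nn


class StyleModulatedBlock(nn.Module):
    """Upsampling conv block whose activations are scaled/shifted by a style code."""

    def __init__(self, in_ch, out_ch, style_dim):
        super().__init__()
        self.conv = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
        self.norm = nn.InstanceNorm2d(out_ch, affine=False)
        self.to_scale = nn.Linear(style_dim, out_ch)
        self.to_shift = nn.Linear(style_dim, out_ch)

    def forward(self, x, z_style):
        h = self.norm(self.conv(x))
        scale = self.to_scale(z_style).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(z_style).unsqueeze(-1).unsqueeze(-1)
        return torch.relu(h * (1 + scale) + shift)


class MixtureGenerator(nn.Module):
    """Maps (content code, style code) -> image, so the two codes can come
    from representations learned on two different domains."""

    def __init__(self, content_dim=128, style_dim=64, base_ch=256, out_ch=3):
        super().__init__()
        self.fc = nn.Linear(content_dim, base_ch * 4 * 4)  # content sets the spatial layout
        self.blocks = nn.ModuleList([
            StyleModulatedBlock(base_ch, base_ch // 2, style_dim),
            StyleModulatedBlock(base_ch // 2, base_ch // 4, style_dim),
            StyleModulatedBlock(base_ch // 4, base_ch // 8, style_dim),
        ])
        self.to_rgb = nn.Conv2d(base_ch // 8, out_ch, 3, padding=1)

    def forward(self, z_content, z_style):
        h = self.fc(z_content).view(z_content.size(0), -1, 4, 4)
        for block in self.blocks:
            h = block(h, z_style)          # style modulates features at every scale
        return torch.tanh(self.to_rgb(h))  # image in [-1, 1]


if __name__ == "__main__":
    G = MixtureGenerator()
    z_c = torch.randn(2, 128)  # content code (e.g., learned from domain A)
    z_s = torch.randn(2, 64)   # style code (e.g., learned from domain B)
    print(G(z_c, z_s).shape)   # torch.Size([2, 3, 32, 32])
```

In this kind of design the content code controls global structure through the initial dense projection, while the style code only rescales and shifts intermediate features, which is one common way to keep the two concepts separable during adversarial training.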

Related research:

- Style and Content Disentanglement in Generative Adversarial Networks (11/14/2018): Disentangling factors of variation within data has become a very challen...
- Unpaired Image Enhancement with Quality-Attention Generative Adversarial Network (12/30/2020): In this work, we aim to learn an unpaired image enhancement model, which...
- Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models (06/08/2023): Text-to-image generative models have enabled high-resolution image synth...
- TOAD-GAN: Coherent Style Level Generation from a Single Example (08/04/2020): In this work, we present TOAD-GAN (Token-based One-shot Arbitrary Dimens...
- MI^2GAN: Generative Adversarial Network for Medical Image Domain Adaptation using Mutual Information Constraint (07/22/2020): Domain shift between medical images from multicentres is still an open q...
- Not Only Generative Art: Stable Diffusion for Content-Style Disentanglement in Art Analysis (04/20/2023): The duality of content and style is inherent to the nature of art. For h...
- Multi-Domain Level Generation and Blending with Sketches via Example-Driven BSP and Variational Autoencoders (06/17/2020): Procedural content generation via machine learning (PCGML) has demonstra...
