Learning Disentangled Prompts for Compositional Image Synthesis

06/01/2023
by   Kihyuk Sohn, et al.

We study domain-adaptive image synthesis, the problem of teaching a pretrained image generative model a new style or concept from as few as one image so that it can synthesize novel images, in order to better understand compositional image synthesis. We present a framework that combines a pretrained class-conditional generative model with visual prompt tuning. Specifically, we propose a novel source-class-distilled visual prompt that learns disentangled prompts for semantics (e.g., class) and domain (e.g., style) from a few images. The learned domain prompt is then used to synthesize images of any class in the style of the target domain. We conduct studies on various target domains with the number of training images ranging from one to a few to many, and present qualitative results that demonstrate the compositional generalization of our method. Moreover, we show that our method can improve zero-shot domain adaptation classification accuracy.
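To make the idea concrete, here is a minimal, hypothetical sketch of the disentanglement described above, not the paper's actual implementation. We assume the frozen class-conditional generator is conditioned on a sequence of token embeddings; the names (`semantic_prompt`, `domain_prompt`, `condition`) and the simple distillation loop are illustrative assumptions. The key point is that the semantic prompt is pulled toward the known source-class embedding, so the domain prompt is left to absorb only style, and can afterwards be composed with any class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the frozen generator takes a sequence of
# token embeddings as conditioning; EMBED_DIM is illustrative.
EMBED_DIM = 8

# Pretrained (frozen) class embeddings, e.g. one per source class.
class_embeddings = {
    "dog": rng.normal(size=EMBED_DIM),
    "cat": rng.normal(size=EMBED_DIM),
}

# Learnable prompts: a semantic prompt distilled toward the source
# class seen in the few target-style images, and a domain prompt
# intended to capture only the style residual.
semantic_prompt = np.zeros(EMBED_DIM)
domain_prompt = rng.normal(size=EMBED_DIM) * 0.01

def condition(class_name: str, use_domain: bool = True) -> np.ndarray:
    """Build the conditioning sequence: [domain?, semantic, class]."""
    tokens = [semantic_prompt, class_embeddings[class_name]]
    if use_domain:
        tokens = [domain_prompt] + tokens
    return np.stack(tokens)

# Source-class distillation (sketch): pull the semantic prompt toward
# the known class embedding so it cannot soak up the target style.
for _ in range(100):
    grad = semantic_prompt - class_embeddings["dog"]
    semantic_prompt -= 0.1 * grad

# At inference, compose the learned domain (style) prompt with ANY
# class embedding, including classes never seen in the target style.
cond_cat_in_style = condition("cat")
print(cond_cat_in_style.shape)  # (3, 8)
```

In a real system the prompts would be optimized by backpropagating a generation loss through the frozen generator; the explicit distillation step above stands in for the source-class distillation objective that keeps class content out of the domain prompt.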


Related research

- DiDA: Disentangled Synthesis for Domain Adaptation (05/21/2018)
- Geometrically Matched Multi-source Microscopic Image Synthesis Using Bidirectional Adversarial Networks (10/26/2020)
- Style-Content Disentanglement in Language-Image Pretraining Representations for Zero-Shot Sketch-to-Image Synthesis (06/03/2022)
- Transferring GANs: generating images from limited data (05/04/2018)
- StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis (03/29/2022)
- Zero-shot Synthesis with Group-Supervised Learning (09/14/2020)
- Learned Spatial Representations for Few-shot Talking-Head Synthesis (04/29/2021)
