Composer: Creative and Controllable Image Synthesis with Composable Conditions

by Lianghua Huang, et al.

Recent large-scale generative models trained on big data can synthesize incredible images yet suffer from limited controllability. This work presents a new generation paradigm that allows flexible control of the output image, such as spatial layout and palette, while maintaining synthesis quality and model creativity. With compositionality as the core idea, we first decompose an image into representative factors, then train a diffusion model conditioned on all of these factors to recompose the input. At inference time, the rich intermediate representations serve as composable elements, yielding a huge design space (exponential in the number of decomposed factors) for customizable content creation. Notably, our approach, which we call Composer, supports conditions at multiple levels: a text description as global information, depth maps and sketches as local guidance, color histograms for low-level details, and so on. Beyond improving controllability, we show that Composer serves as a general framework that enables a wide range of classical generative tasks without retraining. Code and models will be made available.
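To make the "exponential design space" claim concrete: if an image is decomposed into n independent conditioning factors, then at inference any subset of them can be supplied, giving 2^n distinct condition sets. The sketch below enumerates those subsets for a hypothetical factor list (the names are illustrative stand-ins for Composer's conditions, not the model's actual representations):

```python
from itertools import combinations

# Hypothetical factor labels standing in for Composer's decomposed
# conditions (text, depth map, sketch, color histogram, ...).
FACTORS = ["caption", "depth", "sketch", "palette"]

def condition_subsets(factors):
    """Enumerate every subset of conditioning factors that could be
    supplied to the diffusion model at inference time."""
    subsets = []
    for r in range(len(factors) + 1):
        subsets.extend(combinations(factors, r))
    return subsets

subsets = condition_subsets(FACTORS)
print(len(subsets))  # 2**4 = 16 composable condition sets
```

With 4 factors there are 16 condition sets, from the empty set (unconditional generation) to the full set (faithful reconstruction); each added factor doubles the count, which is the sense in which the design space grows exponentially.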



