Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation

11/22/2022
by   Narek Tumanyan, et al.

Large-scale text-to-image generative models have been a revolutionary breakthrough in the evolution of generative AI, allowing us to synthesize diverse images that convey highly complex visual concepts. However, a pivotal challenge in leveraging such models for real-world content creation is providing users with control over the generated content. In this paper, we present a new framework that takes text-to-image synthesis to the realm of image-to-image translation: given a guidance image and a target text prompt, our method harnesses the power of a pre-trained text-to-image diffusion model to generate a new image that complies with the target text while preserving the semantic layout of the source image. Specifically, we observe and empirically demonstrate that fine-grained control over the generated structure can be achieved by manipulating spatial features and their self-attention inside the model. This yields a simple and effective approach in which features extracted from the guidance image are directly injected into the generation process of the target image, requiring no training or fine-tuning, and applicable to both real and generated guidance images. We demonstrate high-quality results on versatile text-guided image translation tasks, including translating sketches, rough drawings, and animations into realistic images, changing the class and appearance of objects in a given image, and modifying global qualities such as lighting and color.
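The core mechanism described above, extracting spatial features from a pass over the guidance image and injecting them into the corresponding layers during generation of the target image, can be illustrated with a minimal sketch. This is not the authors' code: the `toy_unet` function below is a toy stand-in for a diffusion UNet's layers, and the layer indices and shapes are purely illustrative assumptions.

```python
import numpy as np

def toy_unet(x, inject=None):
    """Toy stand-in for a diffusion UNet: two 'layers' whose
    intermediate spatial features we can record or overwrite.
    `inject` maps a layer index to a feature array to substitute
    (mimicking plug-and-play feature injection)."""
    feats = {}
    h = np.tanh(x * 0.5)            # layer 0: coarse spatial features
    if inject and 0 in inject:
        h = inject[0]               # overwrite with guidance-image features
    feats[0] = h
    out = np.tanh(h * 2.0)          # layer 1: output head
    feats[1] = out
    return out, feats

rng = np.random.default_rng(0)
guidance = rng.standard_normal((4, 4))  # stands in for the guidance image's latent
target = rng.standard_normal((4, 4))    # stands in for the target prompt's latent

# Pass 1: run the guidance image and record its spatial features.
_, src_feats = toy_unet(guidance)

# Pass 2: generate for the target, injecting the recorded layer-0 features
# so the output inherits the guidance image's spatial layout.
out, tgt_feats = toy_unet(target, inject={0: src_feats[0]})
```

In the actual method this substitution happens inside the denoising steps of a pre-trained diffusion model (for spatial features and self-attention maps at chosen layers and timesteps), which is what makes the approach training-free.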


