DiffEdit: Diffusion-based semantic image editing with mask guidance

10/20/2022
by Guillaume Couairon, et al.

Image generation has recently seen tremendous advances, with diffusion models able to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method that leverages text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require a mask to be provided, which makes the task much easier by treating it as conditional inpainting. In contrast, our main contribution is the ability to automatically generate a mask highlighting the regions of the input image that need to be edited, by contrasting the predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest, and we show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-generated images.
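The mask-generation idea described above can be illustrated with a short sketch: noise the input image, query the diffusion model's noise prediction under the reference description and under the edit query, and threshold the difference to locate regions that would change. The code below is a minimal, unofficial sketch in PyTorch; `predict_noise` is a hypothetical wrapper around a text-conditioned noise-prediction model (not a real library API), and the noising step uses a placeholder linear blend rather than an actual diffusion schedule.

import torch

@torch.no_grad()
def estimate_edit_mask(predict_noise, x0, t, ref_prompt, query_prompt,
                       n_samples=10, threshold=0.5):
    """Contrast noise predictions under two prompts to locate editable regions.

    predict_noise(x_t, t, prompt) -> predicted noise tensor  (hypothetical wrapper)
    x0: input image tensor of shape (B, C, H, W)
    t:  noising strength in [0, 1] (placeholder for a real schedule timestep)
    """
    diffs = []
    for _ in range(n_samples):
        noise = torch.randn_like(x0)
        # Add noise to the input image (placeholder blend; a real schedule
        # would use the model's alpha/sigma coefficients at timestep t).
        x_t = (1 - t) * x0 + t * noise
        eps_ref = predict_noise(x_t, t, ref_prompt)      # conditioned on original description
        eps_query = predict_noise(x_t, t, query_prompt)  # conditioned on edit query
        # Per-pixel magnitude of the disagreement between the two predictions.
        diffs.append((eps_ref - eps_query).abs().mean(dim=1, keepdim=True))
    diff = torch.stack(diffs).mean(dim=0)
    # Normalize to [0, 1] and binarize: high-disagreement pixels form the edit mask.
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
    return (diff > threshold).float()

Averaging over several noise draws stabilizes the estimate before thresholding; the specific threshold value and any spatial smoothing are illustrative choices here, not the paper's exact settings.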

Related research

- FISEdit: Accelerating Text-to-image Editing via Cache-enabled Sparse Diffusion Inference (05/27/2023)
- On Conditioning the Input Noise for Controlled Image Generation with Diffusion Models (05/08/2022)
- Text-to-image Editing by Image Information Removal (05/27/2023)
- Gradpaint: Gradient-Guided Inpainting with Diffusion Models (09/18/2023)
- MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing (04/17/2023)
- Designing a Better Asymmetric VQGAN for StableDiffusion (06/07/2023)
- Semantic Editing On Segmentation Map Via Multi-Expansion Loss (10/16/2020)
