MaGIC: Multi-modality Guided Image Completion

by   Yongsheng Yu, et al.
Institute of Software, Chinese Academy of Sciences
University of the Chinese Academy of Sciences
University of North Texas

Vanilla image completion approaches are sensitive to large missing regions because limited reference information is available for plausible generation. To mitigate this, existing methods incorporate extra cues as guidance for image completion. Despite improvements, these approaches are often restricted to a single modality (e.g., segmentation or sketch maps), which limits their scalability in leveraging multiple modalities for more plausible completion. In this paper, we propose a novel, simple yet effective method for Multi-modal Guided Image Completion, dubbed MaGIC, which not only supports a wide range of single modalities as guidance (e.g., text, canny edge, sketch, segmentation, reference image, depth, and pose), but also adapts to arbitrarily customized combinations of these modalities (i.e., arbitrary multi-modality) for image completion. To build MaGIC, we first introduce a modality-specific conditional U-Net (MCU-Net) that injects a single-modal signal into a U-Net denoiser for single-modal guided image completion. We then devise a consistent modality blending (CMB) method that leverages the modality signals encoded in multiple learned MCU-Nets through gradient guidance in latent space. Our CMB is training-free, and hence avoids cumbersome joint re-training across modalities; this is the key to MaGIC's exceptional flexibility in accommodating new modalities for completion. Experiments show the superiority of MaGIC over the state of the art and its generalization to various completion tasks, including in/out-painting and local editing. Our project with code and models is available at
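The training-free blending step described above can be sketched as follows. This is a toy illustration only, not the paper's implementation: `guidance_fns` stands in for the per-modality gradient signals that would come from the learned MCU-Nets, and the update rule, weights, and step size are illustrative assumptions.

```python
import numpy as np

def cmb_blend(z, guidance_fns, weights, step=0.1):
    """Toy sketch of training-free modality blending in latent space.

    z            -- current latent (numpy array)
    guidance_fns -- one callable per modality, each returning a gradient-like
                    signal with the same shape as z (placeholder for MCU-Nets)
    weights      -- per-modality blending weights
    """
    grad = np.zeros_like(z)
    for fn, w in zip(guidance_fns, weights):
        grad += w * fn(z)          # accumulate weighted per-modality signals
    return z - step * grad         # nudge the latent along the blended gradient

# Two hypothetical modality signals pulling the latent in opposite directions:
z = np.ones((2, 2))
g_a = lambda z: z                  # pulls values toward 0
g_b = lambda z: z - 2.0            # pulls values toward 2
z_next = cmb_blend(z, [g_a, g_b], [0.5, 0.5])
```

With equal weights the two opposing signals cancel here, leaving the latent unchanged; in general the weights let each modality contribute proportionally without any joint re-training.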


