A Method for Training-free Person Image Picture Generation

05/16/2023
by Tianyu Chen, et al.

Current state-of-the-art diffusion models produce excellent images, but the people they generate are monotonous: the results largely reflect the distribution of person images in the training set, which makes it difficult to generate many distinct images of one fixed individual. This problem is usually solved by fine-tuning the model, which means every individual or animated character to be drawn requires its own training run, and the hardware and cost of that training are beyond the reach of the average user, who makes up the largest group of people using these models. To address this, the Character Image Feature Encoder proposed in this paper lets the user simply provide a picture of the character, so that the character appearing in the generated image matches the expectation, while various details can still be adjusted through prompts during generation. Unlike traditional image-to-image models, the Character Image Feature Encoder extracts only the features relevant to the character's appearance, rather than information about the composition or pose of the reference image. In addition, once trained, the Character Image Feature Encoder can be adapted to different base models. The proposed model can be conveniently incorporated into the Stable Diffusion generation process without modifying the base model itself, or used in combination with Stable Diffusion as a joint model.
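The abstract does not give implementation details, but the following minimal sketch illustrates one plausible way such a plug-in character encoder could work: a frozen CLIP vision backbone extracts appearance features from the reference picture, a small projection maps them into the token space used by Stable Diffusion's cross-attention, and the resulting "character tokens" are appended to the prompt embeddings. The class name, dimensions, and projection architecture here are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of a character-image feature encoder for Stable Diffusion.
import torch
import torch.nn as nn
from transformers import CLIPImageProcessor, CLIPVisionModel


class CharacterImageFeatureEncoder(nn.Module):
    """Extracts appearance features from a reference character image and maps
    them into the 768-d token space used by SD v1.x cross-attention.
    (Illustrative assumption, not the paper's published architecture.)"""

    def __init__(self, clip_name="openai/clip-vit-large-patch14",
                 out_dim=768, num_tokens=4):
        super().__init__()
        self.processor = CLIPImageProcessor.from_pretrained(clip_name)
        self.vision = CLIPVisionModel.from_pretrained(clip_name)
        hidden = self.vision.config.hidden_size
        # Project the pooled image feature into a few pseudo "text tokens".
        self.proj = nn.Linear(hidden, out_dim * num_tokens)
        self.num_tokens = num_tokens
        self.out_dim = out_dim

    @torch.no_grad()
    def encode_image(self, pil_image):
        pixel_values = self.processor(
            images=pil_image, return_tensors="pt").pixel_values
        pooled = self.vision(pixel_values=pixel_values).pooler_output  # (1, hidden)
        return self.proj(pooled).reshape(1, self.num_tokens, self.out_dim)

    def condition(self, text_embeds, pil_image):
        """Append character tokens to the prompt embeddings so the UNet's
        cross-attention sees both the prompt and the character features."""
        char_tokens = self.encode_image(pil_image).to(
            device=text_embeds.device, dtype=text_embeds.dtype)
        return torch.cat([text_embeds, char_tokens], dim=1)
```

Under this reading, the combined embeddings could be handed to an off-the-shelf pipeline (for example as the precomputed prompt embeddings accepted by diffusers' StableDiffusionPipeline), which would match the claim that the encoder plugs into the generation process without touching the base model's weights.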
