User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks

08/09/2018
by Yuanzheng Ci, et al.

Scribble-color-based line art colorization is a challenging computer vision problem, since line arts contain neither greyscale values nor semantic information, and the lack of authentic illustration-line art training pairs further increases the difficulty of model generalization. Recently, several Generative Adversarial Network (GAN) based methods have achieved great success. They can generate colorized illustrations conditioned on a given line art and color hints. However, these methods fail to capture the authentic illustration distribution and are hence perceptually unsatisfying, often lacking accurate shading. To address these challenges, we propose a novel deep conditional adversarial architecture for scribble-based anime line art colorization. Specifically, we integrate the conditional framework with the WGAN-GP criterion and a perceptual loss, which allows us to robustly train a deep network that synthesizes more natural and realistic images. We also introduce a local features network that is independent of synthetic data. By conditioning the GAN on features from this network, we notably increase the generalization capability over "in the wild" line arts. Furthermore, we collect two datasets that provide high-quality colorful illustrations and authentic line arts for training and benchmarking. With the proposed model trained on our illustration dataset, we demonstrate that images synthesized by the presented approach are considerably more realistic and precise than those of alternative approaches.

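To make the loss structure described above concrete, the following is a minimal PyTorch-style sketch, assuming a conditional WGAN-GP adversarial term combined with a perceptual (feature-matching) loss. The names (generator, critic, feature_extractor) and weights (lambda_gp, lambda_perc) are illustrative placeholders chosen here, not the authors' released implementation.

    import torch
    import torch.nn.functional as F

    # Hypothetical sketch of the training objective: a conditional WGAN-GP
    # critic loss plus a perceptual loss on features from a pretrained network.

    def gradient_penalty(critic, real, fake, condition):
        """WGAN-GP penalty on random interpolates between real and fake images."""
        alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
        interp = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
        score = critic(interp, condition)
        grads = torch.autograd.grad(outputs=score.sum(), inputs=interp,
                                    create_graph=True)[0]
        return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    def critic_loss(critic, real, fake, condition, lambda_gp=10.0):
        """Conditional Wasserstein critic loss with gradient penalty."""
        return (critic(fake, condition).mean()
                - critic(real, condition).mean()
                + lambda_gp * gradient_penalty(critic, real, fake, condition))

    def generator_loss(critic, feature_extractor, fake, real, condition,
                       lambda_perc=1e-2):
        """Adversarial term plus perceptual loss on pretrained features."""
        adv = -critic(fake, condition).mean()
        perc = F.mse_loss(feature_extractor(fake), feature_extractor(real))
        return adv + lambda_perc * perc

In this sketch the critic and generator are updated alternately, as in standard WGAN-GP training; the relative weighting of the adversarial and perceptual terms is an assumption and would need tuning.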
