Learning Diverse Tone Styles for Image Retouching

by Haolin Wang et al.
Harbin Institute of Technology

Image retouching, which aims to generate visually pleasing renditions of a given image, is a subjective task: different users have different aesthetic preferences. Most existing methods deploy a deterministic model that learns the retouching style of a single expert, making them inflexible to diverse subjective preferences. Moreover, the intrinsic diversity within a single expert's work, which arises from tailoring the processing to each image, is also insufficiently captured. To circumvent these issues, we propose to learn diverse image retouching with a normalizing flow-based architecture. Unlike current flow-based methods, which directly generate the output image, we argue that learning in a style domain can (i) disentangle the retouching style from the image content, (ii) yield a stable style representation, and (iii) avoid spatial disharmony effects. To obtain meaningful image tone style representations, we carefully design a joint-training pipeline composed of a style encoder, a conditional RetouchNet, and an image tone style normalizing flow (TSFlow) module. In particular, the style encoder predicts the target style representation of an input image, which serves as the conditioning information for retouching in the RetouchNet, while the TSFlow maps the style representation vector into a Gaussian distribution in the forward pass. After training, the TSFlow can generate diverse image tone style vectors by sampling from the Gaussian distribution. Extensive experiments on the MIT-Adobe FiveK and PPR10K datasets show that our method performs favorably against state-of-the-art methods and effectively generates diverse results that satisfy different human aesthetic preferences. Source code and pre-trained models are publicly available at https://github.com/SSRHeart/TSFlow.
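The sampling pipeline described in the abstract can be sketched in a few lines. The following is a minimal illustrative NumPy mock-up, not the authors' implementation: `STYLE_DIM`, the random (untrained) coupling-layer weights, the toy tone-curve `retouch` function, and all class names here are assumptions for illustration. It shows the two directions the abstract mentions: the forward pass maps a style vector toward a Gaussian latent, and the inverse pass samples a Gaussian latent and decodes it into a new style vector that conditions the retouching step.

```python
import numpy as np

rng = np.random.default_rng(0)

STYLE_DIM = 8  # hypothetical style-vector size, not taken from the paper


class AffineCoupling:
    """One affine coupling layer, a standard invertible building block of
    normalizing flows. (A real flow would also permute dimensions between
    layers and train the conditioner networks by log-likelihood.)"""

    def __init__(self, dim, rng):
        self.dim = dim
        half = dim // 2
        # Random fixed weights stand in for a learned conditioner network.
        self.w_s = 0.1 * rng.standard_normal((half, half))
        self.w_t = 0.1 * rng.standard_normal((half, half))

    def forward(self, x):
        # Split; transform the second half conditioned on the first.
        a, b = x[: self.dim // 2], x[self.dim // 2:]
        s = np.tanh(a @ self.w_s)  # log-scale
        t = a @ self.w_t           # translation
        return np.concatenate([a, b * np.exp(s) + t])

    def inverse(self, y):
        a, b = y[: self.dim // 2], y[self.dim // 2:]
        s = np.tanh(a @ self.w_s)
        t = a @ self.w_t
        return np.concatenate([a, (b - t) * np.exp(-s)])


class TSFlowSketch:
    """Stack of coupling layers mapping style vectors <-> Gaussian latents."""

    def __init__(self, dim, n_layers, rng):
        self.layers = [AffineCoupling(dim, rng) for _ in range(n_layers)]

    def to_gaussian(self, style):      # forward pass (training direction)
        for layer in self.layers:
            style = layer.forward(style)
        return style

    def from_gaussian(self, z):        # inverse pass (sampling direction)
        for layer in reversed(self.layers):
            z = layer.inverse(z)
        return z

    def sample_style(self, rng):
        return self.from_gaussian(rng.standard_normal(self.layers[0].dim))


def retouch(image, style):
    """Toy conditional 'RetouchNet': a global tone curve whose gamma and gain
    are read off the style vector (purely illustrative; the real RetouchNet
    is a learned network conditioned on the style representation)."""
    gamma = 1.0 + 0.5 * np.tanh(style[0])
    gain = 1.0 + 0.2 * np.tanh(style[1])
    return np.clip(gain * image ** gamma, 0.0, 1.0)


flow = TSFlowSketch(STYLE_DIM, n_layers=4, rng=rng)
image = rng.random((4, 4))  # stand-in for an input image in [0, 1)

# Two Gaussian samples decode to two style vectors, hence two renditions.
out_a = retouch(image, flow.sample_style(rng))
out_b = retouch(image, flow.sample_style(rng))
```

Because each coupling layer is analytically invertible, encoding a style vector and decoding it back recovers the original exactly, while fresh Gaussian samples yield distinct tone styles for the same input image.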
