Improving the Latent Space of Image Style Transfer

by Yunpeng Bai, et al.

Existing neural style transfer research has focused on matching statistical information between the deep features of content and style images extracted by a pre-trained VGG network, and has achieved significant progress in synthesizing artistic images. However, the feature statistics from the pre-trained encoder are not always consistent with the visual style we perceive: for example, the measured style distance between images of different styles can be smaller than that between images of the same style. In such an inappropriate latent space, the objective functions of existing methods are optimized in the wrong direction, yielding poor stylization results. In addition, the lack of content detail in the features extracted by the pre-trained encoder leads to the content-leak problem. To address these issues in the latent space used for style transfer, we propose two contrastive training schemes that yield a refined encoder better suited to this task. The style contrastive loss pulls the stylized result closer to images of the same visual style and pushes it away from the content image. The content contrastive loss enables the encoder to retain more usable detail. Our training schemes can be added directly to existing style transfer methods and significantly improve their results. Extensive experimental results demonstrate the effectiveness and superiority of our methods.
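The abstract does not give the exact form of the style contrastive loss, but the described behavior (pull the stylized result toward a same-style image, push it away from the content image) matches the standard InfoNCE formulation used in contrastive learning. The sketch below is a minimal, hypothetical illustration of that idea on feature vectors; the function name, the temperature `tau`, and the use of cosine similarity are assumptions, not the paper's actual implementation.

```python
import numpy as np

def style_contrastive_loss(stylized, positive, negatives, tau=0.1):
    """Hypothetical InfoNCE-style sketch of a style contrastive loss.

    stylized:  encoder feature of the stylized result (1-D array)
    positive:  feature of an image with the same visual style (pulled closer)
    negatives: list of features to push away (e.g. the content image)
    tau:       temperature controlling the sharpness of the softmax
    """
    def cos(a, b):
        # cosine similarity between two feature vectors
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    pos = np.exp(cos(stylized, positive) / tau)
    neg = sum(np.exp(cos(stylized, n) / tau) for n in negatives)
    # loss is small when the positive dominates the negatives
    return -np.log(pos / (pos + neg))
```

With this shape of loss, a stylized feature that lies near the same-style positive and far from the content image receives a low penalty, which is the optimization direction the paper argues the raw VGG latent space can get wrong.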


