From A to Z: Supervised Transfer of Style and Content Using Deep Neural Network Generators

03/07/2016
by Paul Upchurch, et al.

We propose a new neural network architecture for solving single-image analogies: the generation of an entire set of stylistically similar images from just a single input image. Solving this problem requires separating image style from content. Our network is a modified variational autoencoder (VAE) that supports supervised training of single-image analogies and in-network evaluation of outputs with a structured similarity objective that captures pixel covariances. On the challenging task of generating a 62-letter font from a single example letter, we produce images with 22.4% lower dissimilarity to the ground truth than the state-of-the-art.
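The "structured similarity objective" the abstract refers to is, in its standard form, the SSIM index of Wang et al., whose local covariance term is what makes the loss sensitive to pixel covariances rather than only per-pixel error. Below is a minimal NumPy sketch (not the authors' code) of a mean-SSIM score and the derived dissimilarity loss; the 7x7 uniform window and the constants k1, k2 are conventional defaults, not values taken from the paper.

```python
# Minimal sketch of an SSIM-based dissimilarity, assuming grayscale
# float images in [0, L]. Window size and constants are conventional
# defaults, not values from the paper.
import numpy as np
from scipy.ndimage import uniform_filter

def ssim(x, y, win=7, L=1.0, k1=0.01, k2=0.03):
    """Mean SSIM between two images.

    Means, variances, and the cross-covariance are computed over local
    win x win windows, so the score reflects local pixel covariance,
    not just per-pixel differences.
    """
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    # E[x^2] - E[x]^2 gives local variance; E[xy] - E[x]E[y] the covariance.
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return np.mean(num / den)

def dssim(x, y):
    # Structural dissimilarity: 0 for identical images, up to 1.
    return (1.0 - ssim(x, y)) / 2.0
```

In a VAE trained this way, a dissimilarity of this form would stand in for the usual per-pixel L2 reconstruction term; the "22.4% lower dissimilarity" figure above is measured against ground-truth glyphs under such a metric.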
