VCE: Variational Convertor-Encoder for One-Shot Generalization
The Variational Convertor-Encoder (VCE) converts an image into various styles. We present this novel architecture for the problem of one-shot generalization, where it transfers to new tasks not seen before without additional training. We also improve the performance of the variational auto-encoder (VAE), filtering out blurred points with a novel algorithm we propose, the large margin VAE (LMVAE). Two samples with the same property are fed to the encoder, and a convertor then processes one of them using the noisy outputs of the encoder; this noise represents a variety of transformation rules and is used to convert new images. Combining and improving on the conditional variational auto-encoder (CVAE) and the introspective VAE, the proposed framework aims to transform images rather than generate them and is used for the one-shot generative process. No sequential inference algorithm is needed during training. Results on the Omniglot dataset show that our model produces more realistic and diverse images than recent approaches.
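The sketch below illustrates the encoder/convertor split described above: an encoder maps a pair of same-style images to a noisy latent code, and a convertor applies that code to a new image. It is a minimal illustration only; the layer sizes, the 28x28 Omniglot-like input shape, the 64-dimensional latent code, and the standard VAE reparameterisation are assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a VCE-style encoder/convertor pair (assumed shapes and layers).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps two same-style images to a noisy latent code (mu, logvar)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1),   # the pair is stacked on channels
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 28x28 -> 7x7 feature maps
            nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.logvar = nn.Linear(64 * 7 * 7, latent_dim)

    def forward(self, x_a, x_b):
        h = self.net(torch.cat([x_a, x_b], dim=1))
        return self.mu(h), self.logvar(h)

class Convertor(nn.Module):
    """Applies the transformation encoded in z to a new, unseen image."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 7 * 7)
        self.img_enc = nn.Sequential(                    # features of the image to convert
            nn.Conv2d(1, 32, 4, stride=4),
            nn.ReLU(),
        )
        self.deconv = nn.Sequential(                     # decode latent + image features
            nn.ConvTranspose2d(64 + 32, 64, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x_new, z):
        z_map = self.fc(z).view(-1, 64, 7, 7)
        x_feat = self.img_enc(x_new)                     # (B, 32, 7, 7)
        return self.deconv(torch.cat([z_map, x_feat], dim=1))

def reparameterize(mu, logvar):
    """Standard VAE reparameterisation: sample the 'noise' that carries the style."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

# Usage: encode a style pair, then convert an unseen image with the sampled noise.
enc, conv = Encoder(), Convertor()
x_a, x_b, x_new = (torch.rand(8, 1, 28, 28) for _ in range(3))
mu, logvar = enc(x_a, x_b)
z = reparameterize(mu, logvar)
x_converted = conv(x_new, z)                             # (8, 1, 28, 28)
```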