Improving Bi-directional Generation between Different Modalities with Variational Autoencoders

01/26/2018
by Masahiro Suzuki, et al.

We investigate deep generative models that can exchange multiple modalities bi-directionally, e.g., generating images from corresponding texts and vice versa. A major approach to this objective is to train a model that integrates the information from all modalities into a joint representation and then generates one modality from another via this joint representation. We first apply this approach directly to variational autoencoders (VAEs), yielding what we call a joint multimodal variational autoencoder (JMVAE). However, we find that when this model attempts to generate a high-dimensional modality that is missing at the input, the joint representation collapses and that modality cannot be generated successfully. Furthermore, we confirm that this difficulty cannot be resolved even by using a known solution. We therefore propose two models to prevent it: JMVAE-kl and JMVAE-h. Our experiments demonstrate that these methods prevent the difficulty above and generate modalities bi-directionally with equal or higher likelihood than conventional VAE methods, which generate in only one direction. Moreover, we confirm that these methods obtain the joint representation appropriately, so that they can generate diverse variations of a modality by traversing the joint representation or by changing the value of the other modality.
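The abstract describes the general idea but not the training objective. Below is a minimal, hypothetical PyTorch sketch of a JMVAE-kl-style objective, assuming Gaussian encoders for the joint posterior q(z|x,w) and the unimodal posteriors q(z|x) and q(z|w), Bernoulli decoders for both modalities, and an extra weighted KL term that pulls each unimodal encoder toward the joint encoder. All class names, network sizes, likelihood choices, and the weight alpha are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a JMVAE-kl-style objective (names and architectures assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """Maps an input to the mean and log-variance of a diagonal Gaussian q(z|.)."""
    def __init__(self, in_dim, z_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, h):
        h = self.net(h)
        return self.mu(h), self.logvar(h)

def kl_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1, dim=-1)

def kl_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ) for diagonal covariances.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p).pow(2)) / logvar_p.exp()
        - 1, dim=-1)

class JMVAEKL(nn.Module):
    """Joint encoder q(z|x,w), unimodal encoders q(z|x), q(z|w), decoders p(x|z), p(w|z)."""
    def __init__(self, x_dim, w_dim, z_dim=64, alpha=0.1):
        super().__init__()
        self.alpha = alpha  # weight of the unimodal-vs-joint KL terms (assumed hyperparameter)
        self.q_joint = GaussianEncoder(x_dim + w_dim, z_dim)
        self.q_x = GaussianEncoder(x_dim, z_dim)
        self.q_w = GaussianEncoder(w_dim, z_dim)
        self.p_x = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
        self.p_w = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, w_dim))

    def loss(self, x, w):
        # Joint ELBO: reconstruct both modalities from z ~ q(z|x,w).
        mu_j, lv_j = self.q_joint(torch.cat([x, w], dim=-1))
        z = mu_j + torch.randn_like(mu_j) * (0.5 * lv_j).exp()  # reparameterization trick
        rec = (F.binary_cross_entropy_with_logits(self.p_x(z), x, reduction='none').sum(-1)
               + F.binary_cross_entropy_with_logits(self.p_w(z), w, reduction='none').sum(-1))
        kl_prior = kl_standard_normal(mu_j, lv_j)
        # Extra KL terms pull the unimodal posteriors toward the joint posterior,
        # so that one modality can be generated from the other alone at test time.
        mu_x, lv_x = self.q_x(x)
        mu_w, lv_w = self.q_w(w)
        kl_uni = (kl_gaussians(mu_j, lv_j, mu_x, lv_x)
                  + kl_gaussians(mu_j, lv_j, mu_w, lv_w))
        return (rec + kl_prior + self.alpha * kl_uni).mean()
```

Under these assumptions, a missing modality would be generated at test time by encoding the observed one with its unimodal encoder (e.g., q_x) and decoding with the other decoder (p_w), which is the bi-directional exchange the abstract refers to.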

Related research

11/07/2016  Joint Multimodal Learning with Deep Generative Models
03/06/2016  Variational methods for Conditional Multimodal Deep Learning
05/19/2023  Improving Multimodal Joint Variational Autoencoders through Normalizing Flows and Correlation Analysis
04/11/2022  Mixture-of-experts VAEs can disregard variation in surjective multimodal data
06/09/2022  Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization
09/07/2022  Benchmarking Multimodal Variational Autoencoders: GeBiD Dataset and Toolkit
11/14/2020  Speech Prediction in Silent Videos using Variational Autoencoders
