Strong and Simple Baselines for Multimodal Utterance Embeddings

05/14/2019
by Paul Pu Liang, et al.

Human language is a rich multimodal signal consisting of spoken words, facial expressions, body gestures, and vocal intonations. Learning representations for these spoken utterances is a complex research problem due to the presence of multiple heterogeneous sources of information. Recent advances in multimodal learning have followed the general trend of building more complex models that utilize various attention, memory and recurrent components. In this paper, we propose two simple but strong baselines to learn embeddings of multimodal utterances. The first baseline assumes a conditional factorization of the utterance into unimodal factors. Each unimodal factor is modeled using the simple form of a likelihood function obtained via a linear transformation of the embedding. We show that the optimal embedding can be derived in closed form by taking a weighted average of the unimodal features. In order to capture richer representations, our second baseline extends the first by factorizing into unimodal, bimodal, and trimodal factors, while retaining simplicity and efficiency during learning and inference. From a set of experiments across two tasks, we show strong performance on both supervised and semi-supervised multimodal prediction, as well as significant (10 times) speedups over neural models during inference. Overall, we believe that our strong baseline models offer new benchmarking options for future research in multimodal learning.
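The closed-form result for the first baseline can be made concrete with a short sketch. The following is a minimal illustration, not the authors' released implementation: it assumes Gaussian unimodal likelihoods p(x_m | z) = N(x_m; W_m z, s_m^2 I) with a standard normal prior on the embedding z, under which the optimal (MAP) embedding is a precision-weighted average of the back-projected unimodal features and can be computed in closed form with a single linear solve. The matrices W_m, noise variances s_m^2, and feature dimensions below are illustrative placeholders.

```python
import numpy as np

def map_embedding(features, weights, noise_vars, dim):
    """Closed-form MAP embedding under Gaussian unimodal likelihoods.

    features:   dict modality -> unimodal feature vector x_m
    weights:    dict modality -> linear map W_m (feature_dim x dim)
    noise_vars: dict modality -> likelihood noise variance s_m^2
    """
    A = np.eye(dim)                  # prior precision (standard normal prior on z)
    b = np.zeros(dim)
    for m, x in features.items():
        W, s2 = weights[m], noise_vars[m]
        A += W.T @ W / s2            # accumulate per-modality precision
        b += W.T @ x / s2            # accumulate per-modality evidence
    return np.linalg.solve(A, b)     # closed-form optimum, no iterative inference

# Toy usage with random parameters; the per-modality dimensions are
# illustrative stand-ins for language/visual/acoustic features.
rng = np.random.default_rng(0)
dim = 8
dims = {"language": 300, "visual": 35, "acoustic": 74}
W = {m: rng.standard_normal((d, dim)) for m, d in dims.items()}
x = {m: rng.standard_normal(d) for m, d in dims.items()}
s2 = {m: 1.0 for m in dims}
z = map_embedding(x, W, s2, dim)
print(z.shape)  # (8,)
```

Because inference reduces to accumulating sufficient statistics and one linear solve, this sketch also suggests why such a baseline can be substantially faster at inference time than recurrent or attention-based alternatives.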

