Multimodal Sentiment Analysis with Word-Level Fusion and Reinforcement Learning

02/03/2018
by   Minghai Chen, et al.

With the increasing popularity of video sharing websites such as YouTube and Facebook, multimodal sentiment analysis has received growing attention from the scientific community. In contrast to previous work in multimodal sentiment analysis, which focuses on holistic information in speech segments such as bag-of-words representations and average facial expression intensity, we develop a novel deep architecture for multimodal sentiment analysis that performs modality fusion at the word level. In this paper, we propose the Gated Multimodal Embedding LSTM with Temporal Attention (GME-LSTM(A)) model, which is composed of two modules. The Gated Multimodal Embedding alleviates the difficulties of fusion in the presence of noisy modalities. The LSTM with Temporal Attention performs word-level fusion at a finer fusion resolution between input modalities and attends to the most important time steps. As a result, the GME-LSTM(A) is able to better model the multimodal structure of speech through time and achieve better sentiment comprehension. We demonstrate the effectiveness of this approach on the publicly available Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis (CMU-MOSI) dataset by achieving state-of-the-art sentiment classification and regression results. Qualitative analysis of our model emphasizes the importance of the Temporal Attention Layer in sentiment prediction, because the additional acoustic and visual modalities are noisy. We also demonstrate the effectiveness of the Gated Multimodal Embedding in selectively filtering these noisy modalities out. Our results and analysis open new areas in the study of sentiment analysis in human communication and provide new models for multimodal fusion.
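To make the two modules concrete, the following is a minimal NumPy sketch of word-level gated fusion followed by temporal attention. All shapes, weight names, and the sigmoid gates are illustrative assumptions: the actual model uses binary on/off gates trained with reinforcement learning and a full LSTM over the fused sequence, whereas this sketch uses soft sigmoid gates and a single linear layer as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Word-aligned features for one 5-word utterance (dimensions are illustrative).
T, d_l, d_a, d_v, d_h = 5, 8, 4, 3, 6
lang = rng.standard_normal((T, d_l))       # language (word embeddings)
acoustic = rng.standard_normal((T, d_a))   # acoustic features per word
visual = rng.standard_normal((T, d_v))     # visual features per word

# Gated Multimodal Embedding: per-word gates that can suppress a noisy
# acoustic or visual input before fusion (soft stand-in for binary gates).
W_ga = rng.standard_normal(d_l + d_a)
W_gv = rng.standard_normal(d_l + d_v)

fused = []
for t in range(T):
    g_a = sigmoid(W_ga @ np.concatenate([lang[t], acoustic[t]]))
    g_v = sigmoid(W_gv @ np.concatenate([lang[t], visual[t]]))
    fused.append(np.concatenate([lang[t], g_a * acoustic[t], g_v * visual[t]]))
fused = np.stack(fused)                    # (T, d_l + d_a + d_v)

# An LSTM would run over `fused` here; a linear map stands in for brevity.
W_h = rng.standard_normal((d_l + d_a + d_v, d_h))
hidden = np.tanh(fused @ W_h)              # (T, d_h) word-level states

# Temporal attention: weight the word-level states by importance and
# pool them into a single utterance summary for sentiment prediction.
w_att = rng.standard_normal(d_h)
alpha = softmax(hidden @ w_att)            # attention weights, sum to 1
context = alpha @ hidden                   # (d_h,) summary vector
```

The key design point is that fusion happens per word (inside the loop) rather than once per segment, so the gates can drop, say, a noisy facial-expression frame for one word while keeping it for the next.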


Related research:

- Cross-Modality Gated Attention Fusion for Multimodal Sentiment Analysis (08/25/2022): Multimodal sentiment analysis is an important research task to predict t...
- Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors (03/01/2022): Multimodal sentiment analysis has attracted increasing attention and lot...
- Learning to Aggregate and Refine Noisy Labels for Visual Sentiment Analysis (09/15/2021): Visual sentiment analysis has received increasing attention in recent ye...
- Meme Sentiment Analysis Enhanced with Multimodal Spatial Encoding and Facial Embedding (03/03/2023): Internet memes are characterised by the interspersing of text amongst vi...
- Look, Read and Feel: Benchmarking Ads Understanding with Multimodal Multitask Learning (12/21/2019): Given the massive market of advertising and the sharply increasing onlin...
- Gated Multimodal Units for Information Fusion (02/07/2017): This paper presents a novel model for multimodal learning based on gated...
- Multimodal Polynomial Fusion for Detecting Driver Distraction (10/24/2018): Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 a...
