Disentangled Variational Autoencoder for Emotion Recognition in Conversations

05/23/2023
by Kailai Yang, et al.

In Emotion Recognition in Conversations (ERC), the emotions of target utterances are closely dependent on their context. Existing works therefore train the model to generate the response to the target utterance, aiming to recognise emotions by leveraging contextual information. However, adjacent-response generation ignores long-range dependencies and in many cases provides limited affective information. In addition, most ERC models learn a unified distributed representation for each utterance, which lacks interpretability and robustness. To address these issues, we propose a VAD-disentangled Variational AutoEncoder (VAD-VAE), which first introduces a target utterance reconstruction task based on a Variational Autoencoder, then disentangles three affect representations, Valence, Arousal, and Dominance (VAD), from the latent space. We further enhance the disentangled representations by introducing VAD supervision signals from a sentiment lexicon and minimising the mutual information between the VAD distributions. Experiments show that VAD-VAE outperforms the state-of-the-art model on two datasets. Further analysis confirms the effectiveness of each proposed module and the quality of the disentangled VAD representations. The code is available at https://github.com/SteveKGYang/VAD-VAE.
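The abstract does not include implementation details, so the following is only a minimal sketch, assuming a PyTorch setup, of the general idea of partitioning a VAE latent space into Valence, Arousal, and Dominance sub-representations with per-factor reparameterisation, KL terms, and lexicon-style VAD supervision heads. All names (VADVAESketch, hidden_dim, vad_dim, etc.) are illustrative and are not taken from the authors' released code.

```python
# Sketch only (not the authors' implementation): a VAE whose latent space
# is split into three sub-representations for Valence, Arousal, and
# Dominance, plus a residual content code. Assumes PyTorch; the encoder
# backbone, decoder, and ERC classifier are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VADVAESketch(nn.Module):
    def __init__(self, hidden_dim=768, vad_dim=16, content_dim=64):
        super().__init__()
        # One (mu, log_var) head per latent factor.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, 2 * dim)
            for name, dim in [("valence", vad_dim), ("arousal", vad_dim),
                              ("dominance", vad_dim), ("content", content_dim)]
        })
        latent_dim = 3 * vad_dim + content_dim
        # Placeholder decoder that reconstructs the utterance encoding.
        self.decoder = nn.Linear(latent_dim, hidden_dim)
        # Scalar VAD predictors for lexicon-based supervision (assumption).
        self.vad_predictors = nn.ModuleDict({
            name: nn.Linear(vad_dim, 1)
            for name in ["valence", "arousal", "dominance"]
        })

    @staticmethod
    def reparameterise(mu, log_var):
        std = torch.exp(0.5 * log_var)
        return mu + std * torch.randn_like(std)

    def forward(self, utterance_encoding):
        zs, kl_total, vad_preds = {}, 0.0, {}
        for name, head in self.heads.items():
            mu, log_var = head(utterance_encoding).chunk(2, dim=-1)
            zs[name] = self.reparameterise(mu, log_var)
            # KL divergence of each factor against a standard normal prior.
            kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1)
            kl_total = kl_total + kl.mean()
        for name, predictor in self.vad_predictors.items():
            # Scalar VAD predictions to be supervised by lexicon scores.
            vad_preds[name] = predictor(zs[name]).squeeze(-1)
        reconstruction = self.decoder(torch.cat(list(zs.values()), dim=-1))
        recon_loss = F.mse_loss(reconstruction, utterance_encoding)
        return zs, vad_preds, recon_loss + kl_total
```

In the paper, the reconstruction target is the utterance itself and the mutual information between the VAD distributions is additionally minimised; both are simplified or omitted in this sketch, which only illustrates the factorised latent structure.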

