Multimodal Emotion Recognition Model using Physiological Signals

11/29/2019
by Yuxuan Zhao, et al.

As an important field of research in human-machine interaction, emotion recognition based on physiological signals has become a research hotspot. Motivated by the outstanding performance of deep learning approaches in recognition tasks, we propose a multimodal emotion recognition model that consists of a 3D convolutional neural network (3D CNN), a 1D convolutional neural network (1D CNN), and a biologically inspired multimodal fusion model that integrates multimodal information at the decision level. We use this model to classify four emotional regions of the arousal-valence plane, i.e., low arousal and low valence (LALV), high arousal and low valence (HALV), low arousal and high valence (LAHV), and high arousal and high valence (HAHV), on the DEAP and AMIGOS datasets. The 3D CNN and 1D CNN are used for emotion recognition based on electroencephalogram (EEG) signals and peripheral physiological signals respectively, reaching an accuracy of 93.53%. Compared with single-modal recognition, the multimodal fusion model improves emotion recognition accuracy by 5%; fusing EEG signals (decomposed into four frequency bands) with peripheral physiological signals reaches an accuracy of 95.77% on these two datasets. By integrating EEG signals and peripheral physiological signals, the model reaches its highest accuracy, about 99%, on both datasets, which shows that the proposed method has certain advantages in solving emotion recognition tasks.
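The abstract does not specify the fusion rule, only that the two single-modal models are combined at the decision level. As a minimal sketch, the snippet below uses a simple weighted average of per-class probabilities as a stand-in for the paper's biologically inspired fusion; the weights and function names are illustrative assumptions, not the authors' method.

```python
# Illustrative only: decision-level fusion of two classifiers' outputs.
# The paper's 3D CNN (EEG) and 1D CNN (peripheral signals) each produce a
# probability over the four arousal-valence quadrants; here a weighted
# average (assumed weights, not from the paper) combines the two decisions.

CLASSES = ["LALV", "HALV", "LAHV", "HAHV"]  # arousal-valence quadrants

def fuse_decisions(p_eeg, p_peripheral, w_eeg=0.6, w_peripheral=0.4):
    """Weighted average of two probability vectors, renormalized to sum to 1."""
    fused = [w_eeg * a + w_peripheral * b for a, b in zip(p_eeg, p_peripheral)]
    total = sum(fused)
    return [x / total for x in fused]

def predict_region(p_eeg, p_peripheral):
    """Return the quadrant with the highest fused probability."""
    fused = fuse_decisions(p_eeg, p_peripheral)
    return CLASSES[max(range(len(fused)), key=fused.__getitem__)]

# Example: both modalities favor HAHV, so the fused decision does too.
print(predict_region([0.1, 0.2, 0.1, 0.6], [0.2, 0.1, 0.2, 0.5]))
```

A weighted sum lets one modality (here EEG, which the paper reports as the stronger single-modal recognizer) dominate when the two models disagree, while renormalization keeps the fused output a valid probability distribution.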
