M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues

11/09/2019
by Trisha Mittal, et al.

We present M3ER, a learning-based method for emotion recognition from multiple input modalities. Our approach combines cues from multiple co-occurring modalities (such as face, text, and speech) and is more robust than other methods to sensor noise in any of the individual modalities. M3ER uses a novel, data-driven multiplicative fusion method to combine the modalities, which learns to emphasize the more reliable cues and suppress the others on a per-sample basis. By introducing a check step that uses Canonical Correlation Analysis (CCA) to distinguish between ineffective and effective modalities, M3ER is robust to sensor noise. M3ER also generates proxy features in place of the ineffectual modalities. We demonstrate the effectiveness of our network through experiments on two benchmark datasets, IEMOCAP and CMU-MOSEI. We report a mean accuracy of 82.7% on IEMOCAP and 89.0% on CMU-MOSEI, which, collectively, is an improvement of about 5% over prior works.
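To make the two key ideas concrete, the sketches below are illustrative only, not the authors' released code. The first shows a multiplicative fusion loss of the kind M3ER builds on (following the multiplicative combination loss of Liu et al.), in which each modality's log-loss is down-weighted by how confident the remaining modalities already are, so unreliable cues are suppressed per sample; the function name and the beta value are assumptions.

    import torch

    def multiplicative_fusion_loss(probs, target, beta=2.0):
        """probs: list of (batch, num_classes) softmax outputs, one per modality.
        target: (batch,) ground-truth class indices."""
        M = len(probs)
        eps = 1e-8
        # p_i: each modality's predicted probability of the true class
        p_true = [p.gather(1, target.unsqueeze(1)).squeeze(1) for p in probs]
        loss = 0.0
        for i in range(M):
            # Down-weight modality i by the other modalities' confidence:
            # weight_i = prod over j != i of (1 - p_j)^(beta / (M - 1))
            weight = torch.ones_like(p_true[i])
            for j in range(M):
                if j != i:
                    weight = weight * (1.0 - p_true[j]).clamp(min=eps) ** (beta / (M - 1))
            loss = loss - (weight * torch.log(p_true[i].clamp(min=eps))).mean()
        return loss

The second sketch illustrates the CCA-based check step: a modality whose top canonical correlation with another modality falls below a threshold can be flagged as ineffective (and, in M3ER, replaced by proxy features). The helper name and the threshold tau are hypothetical, not values reported in the paper.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    def modality_is_effective(feat_a, feat_b, tau=0.1):
        """feat_a, feat_b: (n_samples, dim) feature matrices for two
        modalities. Returns True if their top canonical correlation
        exceeds the (hypothetical) threshold tau."""
        u, v = CCA(n_components=1).fit_transform(feat_a, feat_b)
        rho = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        return rho > tau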

