Machine learning for the recognition of emotion in the speech of couples in psychotherapy using the Stanford Suppes Brain Lab Psychotherapy Dataset

01/14/2019
by Colleen E. Crangle, et al.

The automatic recognition of emotion in speech can inform our understanding of language, emotion, and the brain. It also has practical application to human-machine interactive systems. This paper examines the recognition of emotion in naturally occurring speech, where there are no constraints on what is said or on the emotions expressed. This task is more difficult than recognition using data collected in scripted, experimentally controlled settings, and fewer results are published. Our data come from couples in psychotherapy. Video and audio recordings were made of three couples (A, B, C) over 18 hour-long therapy sessions. This paper describes the method used to code the audio recordings for the four emotions of Anger, Sadness, Joy, and Tension, plus Neutral, and our approach to managing the unbalanced samples that a naturally occurring emotional speech dataset produces. Three groups of acoustic features were used in our analysis: filter-bank, frequency, and voice-quality features. A random forests model was used to classify the features. Recognition rates are reported for each individual, the result of the speaker-dependent models that we built. In each case, the best recognition rates were achieved using the filter-bank features alone. For Couple A, these rates were 90% and 87%. For Couple B, the rates were 84% for the recognition of all four emotions plus Neutral. For Couple C, a rate of 88% was achieved for the female for the recognition of the four emotions plus Neutral, and 95% for the male; other rates ranged from 76% upward. These results show that couple therapy is a rich context for the study of emotion in naturally occurring speech.
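The paper does not publish its feature-extraction or modelling code. As a rough illustration of the kind of pipeline the abstract describes (filter-bank acoustic features, a speaker-dependent random forests classifier, and some compensation for unbalanced emotion classes), the Python sketch below uses librosa and scikit-learn. The per-band mean/standard-deviation summarisation, the 40-band mel setting, and the class_weight="balanced" strategy are illustrative assumptions, not the authors' actual method.

    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    EMOTIONS = ["Anger", "Sadness", "Joy", "Tension", "Neutral"]

    def filterbank_features(wav_path, sr=16000, n_mels=40):
        # Log mel filter-bank energies, summarised over the utterance
        # by the mean and standard deviation of each band (an assumed
        # summarisation, chosen only to keep the sketch simple).
        y, sr = librosa.load(wav_path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        log_mel = librosa.power_to_db(mel)
        return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

    def train_speaker_model(wav_paths, labels):
        # One model per speaker (speaker-dependent, as in the paper);
        # class_weight="balanced" is one way to offset the unbalanced
        # emotion counts a naturally occurring dataset produces.
        X = np.vstack([filterbank_features(p) for p in wav_paths])
        y = np.asarray(labels)
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)
        clf = RandomForestClassifier(
            n_estimators=500, class_weight="balanced", random_state=0)
        clf.fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te), labels=EMOTIONS))
        return clf

Calling train_speaker_model with one speaker's utterance files and their emotion labels would yield a per-emotion precision/recall report for that speaker, loosely mirroring the per-individual recognition rates the abstract reports.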

