Extracting the Locus of Attention at a Cocktail Party from Single-Trial EEG using a Joint CNN-LSTM Model
The human brain performs remarkably well at segregating a particular speaker from interfering speakers in a multi-speaker scenario. It has recently been shown that this segregation capability can be quantitatively evaluated by modelling the relationship between the speech signals present in an auditory scene and the cortical signals of the listener measured using electroencephalography (EEG). This has opened up avenues to integrate neuro-feedback into hearing aids, whereby the device can infer the user's attention and enhance the attended speaker. Commonly used algorithms to infer auditory attention are based on linear systems theory, where speech cues such as envelopes are mapped onto the EEG signals. Here, we present a joint convolutional neural network (CNN) - long short-term memory (LSTM) model to infer auditory attention. Our joint CNN-LSTM model takes the EEG signals and the spectrograms of the multiple speakers as inputs and classifies the attention to one of the speakers. We evaluated the reliability of our neural network using three different datasets comprising 61 subjects, where each subject undertook a dual-speaker experiment. The three datasets analysed corresponded to speech stimuli presented in three different languages, namely German, Danish and Dutch. Using the proposed joint CNN-LSTM model, we obtained a median decoding accuracy of 77.2%. Furthermore, we evaluated the amount of sparsity that our model can tolerate by means of magnitude pruning and found that the model can tolerate up to 50% sparsity without a substantial loss of decoding accuracy.
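As an illustration of the kind of architecture the abstract describes, the following is a minimal sketch, not the authors' released code: a joint CNN-LSTM classifier that takes single-trial EEG and one spectrogram per competing speaker and outputs attended-speaker logits. All layer sizes, input shapes (EEG as batch x channels x time, spectrograms as batch x frequency-bins x time), and parameter choices here are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class JointCnnLstmDecoder(nn.Module):
    """Hypothetical joint CNN-LSTM attention decoder (illustrative only)."""

    def __init__(self, eeg_channels=64, freq_bins=40, hidden_size=64):
        super().__init__()
        # CNN front-end for the EEG: 1-D convolution across time,
        # mixing information from all EEG channels.
        self.eeg_cnn = nn.Sequential(
            nn.Conv1d(eeg_channels, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.BatchNorm1d(32),
        )
        # Shared CNN front-end applied to each speaker's spectrogram.
        self.spec_cnn = nn.Sequential(
            nn.Conv1d(freq_bins, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.BatchNorm1d(32),
        )
        # LSTM over the concatenated EEG and speech feature sequences.
        self.lstm = nn.LSTM(input_size=32 * 3, hidden_size=hidden_size,
                            batch_first=True)
        # Two-way classifier: which of the two speakers is attended.
        self.classifier = nn.Linear(hidden_size, 2)

    def forward(self, eeg, spec_a, spec_b):
        f_eeg = self.eeg_cnn(eeg)        # (batch, 32, time)
        f_a = self.spec_cnn(spec_a)      # (batch, 32, time)
        f_b = self.spec_cnn(spec_b)      # (batch, 32, time)
        feats = torch.cat([f_eeg, f_a, f_b], dim=1)  # (batch, 96, time)
        feats = feats.transpose(1, 2)                # (batch, time, 96)
        _, (h_n, _) = self.lstm(feats)
        return self.classifier(h_n[-1])              # (batch, 2) logits


# Example forward pass with random tensors of the assumed shapes.
model = JointCnnLstmDecoder()
eeg = torch.randn(8, 64, 192)
spec_a = torch.randn(8, 40, 192)
spec_b = torch.randn(8, 40, 192)
logits = model(eeg, spec_a, spec_b)  # attended-speaker logits, shape (8, 2)
```

The abstract also mentions evaluating sparsity tolerance by magnitude pruning. A hedged sketch of one way to do this, using PyTorch's pruning utilities (the paper does not specify its tooling), prunes the smallest-magnitude weights globally at a 50% sparsity level:

```python
import torch.nn.utils.prune as prune

# Prune convolutional and linear weights globally by magnitude (50% sparsity).
params_to_prune = [(m, "weight") for m in model.modules()
                   if isinstance(m, (nn.Conv1d, nn.Linear))]
prune.global_unstructured(params_to_prune,
                          pruning_method=prune.L1Unstructured,
                          amount=0.5)
```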