Decoding speech from non-invasive brain recordings

by Alexandre Défossez, et al.

Decoding language from brain activity is a long-awaited goal in both healthcare and neuroscience. Major milestones have recently been reached thanks to intracranial devices: subject-specific pipelines trained on invasive brain responses to basic language tasks are now starting to decode interpretable features (e.g. letters, words, spectrograms). However, scaling this approach to natural speech and non-invasive brain recordings remains a major challenge. Here, we propose a single end-to-end architecture trained with contrastive learning across a large cohort of individuals to predict self-supervised representations of natural speech. We evaluate our model on four public datasets, encompassing 169 volunteers recorded with magneto- or electro-encephalography (M/EEG) while they listened to natural speech. The results show that our model can identify, from 3 s of MEG signals, the corresponding speech segment with up to 72.5% accuracy among the distinct candidate segments (44% for EEG recordings), hence allowing the decoding of phrases absent from the training set. Model comparison and ablation analyses show that this performance directly benefits from our original design choices, namely the use of (i) a contrastive objective, (ii) pretrained representations of speech and (iii) a common convolutional architecture simultaneously trained across several participants. Together, these results delineate a promising path to decode natural language processing in real time from non-invasive recordings of brain activity.
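The contrastive objective described above can be illustrated with a CLIP-style InfoNCE loss: each window of brain activity is trained to match the pretrained representation of its own speech segment against the other segments in the batch, and identification at test time amounts to retrieving the best-matching segment. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function name, embedding dimension, batch size and temperature are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(brain_latents, speech_latents, temperature=0.1):
    """InfoNCE-style loss: row i of `brain_latents` should match row i of
    `speech_latents` among all other rows in the batch.
    Both inputs are (batch, dim) arrays of segment-level embeddings."""
    # Cosine similarities between every brain window and every speech segment.
    b = brain_latents / np.linalg.norm(brain_latents, axis=1, keepdims=True)
    s = speech_latents / np.linalg.norm(speech_latents, axis=1, keepdims=True)
    logits = (b @ s.T) / temperature              # (batch, batch)
    # Numerically stable log-softmax over candidate speech segments.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matching pairs sit on the diagonal.
    return -np.mean(np.diag(log_probs))

# Toy usage with random "brain" and "speech" embeddings for 8 segments.
rng = np.random.default_rng(0)
brain = rng.normal(size=(8, 64))
speech = rng.normal(size=(8, 64))
loss = contrastive_loss(brain, speech)

# Segment identification = retrieval of the most similar speech candidate.
sims = brain @ speech.T
pred = sims.argmax(axis=1)                        # predicted segment per window
```

At decoding time, the candidate set need not come from the training data: any new phrase can be embedded by the pretrained speech model and ranked by similarity, which is what allows decoding phrases absent from the training set.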




