DEPA: Self-supervised audio embedding for depression detection
Depression detection research has increased over the last few decades as the disease has become a pressing societal problem. A major bottleneck for developing automatic depression detection methods lies in the limited availability of labeled data. Recently, pretrained text embeddings have seen success in sparse-data scenarios, whereas pretrained audio embeddings are rarely investigated. This paper proposes DEPA, a self-supervised, Word2Vec-like pretrained depression audio embedding method for depression detection. An encoder-decoder network is used to extract DEPA on a sparse in-domain dataset (DAIC) and on large out-of-domain datasets (Switchboard, Alzheimer's). With DEPA as the audio embedding, performance significantly surpasses that of traditional audio features on both classification and regression metrics. Moreover, we show that large-scale out-of-domain pretraining benefits depression detection performance.
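The abstract describes the pretraining only at a high level. The sketch below is a hypothetical illustration (PyTorch, with made-up layer choices and dimensions, not the paper's exact configuration) of how an encoder-decoder network could be pretrained with a Word2Vec-like objective: encode surrounding spectrogram frames into a fixed-size vector, decode that vector to reconstruct a masked central chunk, and then reuse the encoder output as the audio embedding for downstream depression detection.

```python
# Hypothetical sketch of Word2Vec-style encoder-decoder pretraining on
# log-mel spectrogram chunks: reconstruct a masked central chunk from its
# surrounding context. Dimensions and layers are illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_mels=64, hidden=256, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, emb_dim)

    def forward(self, x):                      # x: (batch, frames, n_mels)
        _, h = self.rnn(x)                     # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)    # (batch, 2 * hidden)
        return self.proj(h)                    # fixed-size embedding (DEPA-like)

class Decoder(nn.Module):
    def __init__(self, n_mels=64, hidden=256, emb_dim=128, out_frames=16):
        super().__init__()
        self.out_frames = out_frames
        self.expand = nn.Linear(emb_dim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, z):                      # z: (batch, emb_dim)
        h = self.expand(z).unsqueeze(1).repeat(1, self.out_frames, 1)
        y, _ = self.rnn(h)
        return self.out(y)                     # (batch, out_frames, n_mels)

encoder, decoder = Encoder(), Decoder()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

# Toy batch: context frames surrounding a masked central chunk; the target
# is the masked chunk itself (random tensors stand in for real features).
context = torch.randn(8, 48, 64)   # surrounding spectrogram frames
target = torch.randn(8, 16, 64)    # masked central frames to reconstruct

pred = decoder(encoder(context))
loss = nn.functional.mse_loss(pred, target)
loss.backward()
optimizer.step()
```

After pretraining on a large corpus under such an objective, the encoder alone would be applied to new recordings, and its fixed-size output used as the audio representation fed to a depression classifier or regressor.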