Exploiting Fully Convolutional Network and Visualization Techniques on Spontaneous Speech for Dementia Detection
In this paper, we exploit a Fully Convolutional Network (FCN) to analyze audio recordings of spontaneous speech for dementia detection. Because a fully convolutional network accommodates speech samples of varying lengths, we can analyze each speech sample without manual segmentation. Specifically, we first obtain a Mel Frequency Cepstral Coefficient (MFCC) feature map from each participant's audio recording, converting the speech classification task on audio data into an image classification task on MFCC feature maps. Then, to address the problem of limited training data, we apply transfer learning, adopting a backbone Convolutional Neural Network (CNN) based on the MobileNet architecture and pre-trained on the ImageNet dataset. We further build a convolutional layer that produces a heatmap, thresholded with Otsu's method, for visualization, enabling us to understand the impact of time-series audio segments on the classification results. We demonstrate that our classification model achieves 66.7% accuracy on the ADReSS challenge dataset. Through the visualization technique, we can evaluate the impact of specific audio segments, such as filled pauses from the participants and repeated questions from the investigator, on the classification results.
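To make the pipeline concrete, the MFCC feature-map step could be sketched as follows. This is a minimal illustration assuming the librosa library; the sampling rate and number of coefficients are illustrative defaults, not parameters reported in the paper.

import librosa
import numpy as np

def mfcc_feature_map(audio_path, n_mfcc=40):
    """Load a speech recording and return its MFCC feature map.

    The map has shape (n_mfcc, n_frames); n_frames varies with the
    recording length, which a fully convolutional model tolerates.
    """
    signal, sr = librosa.load(audio_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Normalize to [0, 1] so the map can be treated as a grayscale image.
    mfcc = (mfcc - mfcc.min()) / (mfcc.max() - mfcc.min() + 1e-8)
    return mfcc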
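The transfer-learning step could look like the sketch below, assuming TensorFlow/Keras: an ImageNet-pretrained MobileNet backbone is used fully convolutionally, so the time axis of the input may vary across samples. The pooling and classification head, and the tiling of the single-channel MFCC map to three channels, are assumptions for illustration, not the paper's exact design.

import numpy as np
import tensorflow as tf

def to_three_channel(mfcc):
    # Tile the single-channel MFCC map to three channels to match the
    # ImageNet-pretrained backbone's expected input (an assumption here).
    return np.repeat(mfcc[..., np.newaxis], 3, axis=-1)

def build_fcn_classifier(n_mfcc=40):
    # include_top=False drops the fixed-size classifier head, leaving only
    # convolutional layers, so the time dimension may be None (variable).
    backbone = tf.keras.applications.MobileNet(
        weights="imagenet",
        include_top=False,
        input_shape=(n_mfcc, None, 3),
    )
    inputs = tf.keras.Input(shape=(n_mfcc, None, 3))
    features = backbone(inputs)
    # Global pooling collapses the variable-length time axis to a fixed size
    # before the binary (dementia vs. control) decision.
    pooled = tf.keras.layers.GlobalAveragePooling2D()(features)
    output = tf.keras.layers.Dense(1, activation="sigmoid")(pooled)
    return tf.keras.Model(inputs, output)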
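The visualization step could be sketched as below, assuming scikit-image for Otsu thresholding. Here heatmap stands in for the time-aligned activation profile produced by the added convolutional layer, and the helper's name and interface are hypothetical.

import numpy as np
from skimage.filters import threshold_otsu

def salient_time_segments(heatmap, hop_seconds):
    """Binarize a 1-D time-aligned heatmap with Otsu's method and return
    the (start, end) times, in seconds, of the segments deemed influential
    for the classification decision."""
    threshold = threshold_otsu(np.asarray(heatmap))
    mask = heatmap > threshold
    segments, start = [], None
    for i, on in enumerate(mask):
        if on and start is None:
            start = i
        elif not on and start is not None:
            segments.append((start * hop_seconds, i * hop_seconds))
            start = None
    if start is not None:
        segments.append((start * hop_seconds, len(mask) * hop_seconds))
    return segments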