1-D CNN based Acoustic Scene Classification via Reducing Layer-wise Dimensionality

03/31/2022
by Arshdeep Singh, et al.

This paper presents an alternative to the commonly used time-frequency representation for acoustic scene classification (ASC). A raw audio signal is represented using the activations of various intermediate layers of a pre-trained convolutional neural network (CNN). The study assumes that the representations obtained from the intermediate layers are intrinsically low-dimensional. To obtain low-dimensional embeddings, principal component analysis is performed, and the analysis shows that only a few principal components are significant. However, the appropriate number of significant components is not known a priori. To address this, an automatic dictionary learning framework is utilized that approximates the underlying subspace. Further, the low-dimensional embeddings are aggregated in a late-fusion manner within an ensemble framework to incorporate the hierarchical information learned at the various intermediate layers. The experimental evaluation is performed on the publicly available DCASE 2017 and 2018 ASC datasets using a pre-trained 1-D CNN, SoundNet. Empirically, it is observed that the deeper layers admit a higher compression ratio than the others. At 70% compression, the classification performance is similar to that obtained without performing any dimensionality reduction. The proposed framework outperforms time-frequency representation based methods.
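The layer-wise dimensionality reduction described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example and not the authors' code: it applies scikit-learn's PCA to a placeholder matrix standing in for intermediate-layer activations of a pre-trained 1-D CNN such as SoundNet, checks how few principal components retain most of the variance, and projects onto that subspace. The array `layer_embeddings`, its shape, and the 95% variance threshold are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' implementation): PCA on layer-wise CNN embeddings.
import numpy as np
from sklearn.decomposition import PCA

# Placeholder for activations of one intermediate layer of a pre-trained 1-D CNN
# (e.g. SoundNet), with one row per audio clip: shape (n_clips, n_features).
rng = np.random.default_rng(0)
layer_embeddings = rng.standard_normal((500, 1024))

# Fit PCA with all components to inspect the variance spectrum; the paper's
# premise is that only a few components are significant, i.e. the cumulative
# explained-variance curve saturates quickly.
pca = PCA()
pca.fit(layer_embeddings)
cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumvar, 0.95) + 1)  # components covering 95% variance (illustrative threshold)
print(f"{k} components explain 95% of the variance")

# Project onto the first k components to obtain the low-dimensional embeddings.
low_dim = PCA(n_components=k).fit_transform(layer_embeddings)
print("reduced shape:", low_dim.shape)
```

In the paper, the number of retained components is instead chosen automatically via a dictionary learning framework rather than a fixed variance threshold, and the resulting per-layer embeddings are combined by late fusion in an ensemble; the threshold above is only a stand-in for that step.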
