On the performance of residual block design alternatives in convolutional neural networks for end-to-end audio classification

by Javier Naranjo-Alcazar, et al.

Residual learning is a recently proposed framework to facilitate the training of very deep neural networks. Residual blocks or units consist of a set of stacked layers whose inputs are added back to their outputs with the aim of creating identity mappings; in practice, such identity mappings are implemented by means of so-called skip or residual connections. However, multiple implementation alternatives arise with respect to where such skip connections are applied within the set of stacked layers that make up a residual block. While ResNet architectures for image classification with convolutional neural networks (CNNs) have been widely discussed in the literature, few works have so far adopted ResNet architectures for 1D audio classification tasks. Thus, the suitability of different residual block designs for raw audio classification is partly unknown. The purpose of this paper is to analyze and discuss the performance of several residual block implementations within a state-of-the-art CNN-based architecture for end-to-end audio classification using raw audio waveforms. For comparison, we also analyze the performance of the residual blocks under a similar 2D architecture using a conventional time-frequency audio representation as input. The results show that the achieved accuracy depends considerably not only on the specific residual block implementation, but also on the selected input normalization.
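The design choice discussed above, where the skip connection and the activations are placed relative to the stacked layers, can be illustrated with a minimal toy sketch. Dense matrix multiplications stand in for the paper's 1D convolutions, and all names (`post_act_block`, `pre_act_block`) are our own; this is a sketch of the two best-known variants (original "post-activation" ResNet vs. the "pre-activation" identity-mapping design), not the paper's implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
d = 8
W1 = rng.standard_normal((d, d)) * 0.1
W2 = rng.standard_normal((d, d)) * 0.1
x = rng.standard_normal(d)

def post_act_block(x):
    # Original ResNet ordering: transform -> ReLU -> transform,
    # add the input, then apply a final ReLU after the addition.
    return relu(W2 @ relu(W1 @ x) + x)

def pre_act_block(x):
    # Pre-activation ordering: ReLU precedes each transform, and the
    # skip path carries the input through as a clean identity mapping.
    return W2 @ relu(W1 @ relu(x)) + x

y_post = post_act_block(x)
y_pre = pre_act_block(x)
```

Because the post-activation variant passes the sum through a final ReLU, its output is clamped to be non-negative, whereas the pre-activation variant leaves the identity path untouched; this is one concrete way the placement of the skip connection changes what the block can represent.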


Related Papers

- Identity Mappings in Deep Residual Networks
- Deep Convolutional and Recurrent Networks for Polyphonic Instrument Classification from Monophonic Raw Audio Waveforms
- Tandem Blocks in Deep Convolutional Neural Networks
- Constrained Linear Data-feature Mapping for Image Classification
- Polynomial Networks in Deep Classifiers
- Deep Convolutional Neural Networks with Merge-and-Run Mappings
- Feature Embedding by Template Matching as a ResNet Block
