Listening to the World Improves Speech Command Recognition

10/23/2017
by Brian McMahan, et al.

We study transfer learning in convolutional network architectures applied to the task of recognizing audio, such as environmental sound events and speech commands. Our key finding is that it is not only possible to transfer representations from an unrelated task like environmental sound classification to a voice-focused task like speech command recognition, but that doing so improves accuracies significantly. We also investigate the effect of increased model capacity for transfer learning in audio, first validating the known result from Computer Vision that deeper networks achieve better accuracies, on two audio datasets: UrbanSound8k and the newly released Google Speech Commands dataset. We then propose a simple multiscale input representation using dilated convolutions and show that it aggregates larger contexts and increases classification performance. Further, models trained with a combination of transfer learning and multiscale input representations need only 40% of the training data to achieve accuracies similar to those of a freshly trained model given 100% of the training data. Finally, we demonstrate a positive interaction effect between the multiscale input and transfer learning, making a case for the joint application of the two techniques.
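The abstract names two techniques: a multiscale input built from dilated convolutions, and transfer from environmental sound classification to speech command recognition. As a minimal sketch of how such a front end and transfer workflow might look (assuming a PyTorch setup; the module name, channel counts, dilation rates, and checkpoint path below are illustrative assumptions, not the authors' actual architecture):

```python
import torch
import torch.nn as nn

class MultiscaleFrontEnd(nn.Module):
    """Parallel convolutions with growing dilation rates applied to the
    same input (e.g., a log-mel spectrogram), then concatenated, so the
    network sees several effective context sizes at once."""

    def __init__(self, in_channels=1, branch_channels=16, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                      padding=d, dilation=d)  # padding=d keeps spatial size for k=3
            for d in dilations
        )

    def forward(self, x):
        # Larger dilation -> wider receptive field at the same parameter cost;
        # concatenating the branches yields a multiscale representation.
        return torch.cat([torch.relu(branch(x)) for branch in self.branches], dim=1)

# Transfer-learning sketch (an assumed workflow, not the paper's exact recipe):
# pretrain on UrbanSound8k (10 classes), then re-head and fine-tune on
# Google Speech Commands (30 keywords in the initial release).
model = nn.Sequential(
    MultiscaleFrontEnd(),             # 3 branches x 16 channels = 48 feature maps
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(48, 10),                # UrbanSound8k classifier head
)
# model.load_state_dict(torch.load("urbansound8k_pretrained.pt"))  # hypothetical checkpoint
model[-1] = nn.Linear(48, 30)         # swap head for Speech Commands, then fine-tune
```

A dummy forward pass such as `model(torch.randn(8, 1, 40, 101))`, for a batch of 40-mel, roughly one-second spectrograms, returns an (8, 30) tensor of class logits.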
