TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices

08/10/2020
by Alexander Wong, et al.

Advances in deep learning have led to state-of-the-art performance across a multitude of speech recognition tasks. Nevertheless, the widespread deployment of deep neural networks for on-device speech recognition remains a challenge, particularly in edge scenarios where memory and computing resources are highly constrained (e.g., low-power embedded devices) or where the memory and computing budget dedicated to speech recognition is low (e.g., mobile devices performing numerous tasks besides speech recognition). In this study, we introduce the concept of attention condensers for building low-footprint, highly efficient deep neural networks for on-device speech recognition on the edge. An attention condenser is a self-attention mechanism that learns and produces a condensed embedding characterizing joint local and cross-channel activation relationships, and performs selective attention accordingly. To illustrate its efficacy, we introduce TinySpeech, a family of low-precision deep neural networks comprised largely of attention condensers and tailored for on-device speech recognition via a machine-driven design exploration strategy, with one network tailored specifically for microcontroller operation constraints. Experimental results on the Google Speech Commands benchmark dataset for limited-vocabulary speech recognition showed that TinySpeech networks achieved significantly lower architectural complexity (as much as 507× fewer parameters), lower computational complexity (as much as 48× fewer multiply-add operations), and lower storage requirements (as much as 2028× lower weight memory requirements) when compared to previous work. These results not only demonstrate the efficacy of attention condensers for building highly efficient networks for on-device speech recognition, but also illuminate their potential for accelerating deep learning on the edge and empowering TinyML applications.
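The abstract describes the attention condenser only at a high level: condense the activations, embed joint local and cross-channel relationships, expand the result, and use it to selectively attend to the input. The PyTorch sketch below is a rough illustration of how such a module might be wired up; the pooling factor, embedding width, activation choices, and learnable scale term are assumptions made for illustration and are not the exact TinySpeech configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionCondenser(nn.Module):
    """Illustrative sketch of an attention condenser: a self-attention module
    that builds a condensed embedding of joint local and cross-channel
    activation relationships, then applies selective attention to the input.
    Hyperparameters here are assumptions, not the published TinySpeech design."""

    def __init__(self, channels: int, reduced_channels: int = 8, pool: int = 2):
        super().__init__()
        # Condensation: spatially downsample to keep the attention footprint small.
        self.condense = nn.MaxPool2d(kernel_size=pool, stride=pool)
        # Embedding: lightweight convolutions capture local and cross-channel structure.
        self.embed = nn.Sequential(
            nn.Conv2d(channels, reduced_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(reduced_channels, channels, kernel_size=3, padding=1),
        )
        # Learnable scale used when applying the selective attention (assumed form).
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # Produce the condensed self-attention values.
        a = self.embed(self.condense(v))
        # Expansion: bring the attention map back to the input resolution.
        a = F.interpolate(a, size=v.shape[-2:], mode="nearest")
        a = torch.sigmoid(a)
        # Selective attention: modulate the input activations.
        return v * (a + self.scale)


# Example: attention over a batch of 16-channel mel-spectrogram feature maps.
x = torch.randn(4, 16, 40, 49)
y = AttentionCondenser(channels=16)(x)
print(y.shape)  # torch.Size([4, 16, 40, 49])
```

Under these assumptions, stacking such blocks in place of heavier convolutional or full self-attention layers is what would keep parameter counts and multiply-add operations within the microcontroller-scale budgets reported above.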
