End-to-end Keyword Spotting using Neural Architecture Search and Quantization

04/14/2021
by   David Peter, et al.

This paper introduces neural architecture search (NAS) for the automatic discovery of end-to-end keyword spotting (KWS) models in limited-resource environments. We employ a differentiable NAS approach to optimize the structure of convolutional neural networks (CNNs) operating on raw audio waveforms. Once a suitable KWS model is found with NAS, we quantize its weights and activations to reduce the memory footprint. We conduct extensive experiments on the Google Speech Commands dataset. In particular, we compare our end-to-end approach to mel-frequency cepstral coefficient (MFCC) based systems. For quantization, we compare fixed bit-width quantization and trained bit-width quantization. Using NAS only, we obtain a highly efficient model with an accuracy of 95.55%. Using trained bit-width quantization, the same model achieves a test accuracy of 93.76% while using only a few bits per weight on average.
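The abstract contrasts fixed and trained bit-width quantization. As a rough illustration of the first scheme only, here is a minimal NumPy sketch of symmetric uniform quantization of a weight tensor to a fixed bit-width; the function name and the per-tensor scaling are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def quantize_fixed(w, bits):
    """Symmetric uniform quantization of a weight tensor to a fixed bit-width.

    Snaps each weight to one of 2**bits - 1 evenly spaced levels in
    [-max|w|, max|w|]. Illustrative sketch; assumes w is not all zeros.
    """
    qmax = 2 ** (bits - 1) - 1            # largest integer level, e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax      # per-tensor scale (an assumption here)
    q_int = np.clip(np.round(w / scale), -qmax, qmax)
    return q_int * scale                  # dequantized values on the fixed grid

# Example: quantization error grows as the fixed bit-width shrinks
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
for bits in (8, 4, 2):
    err = float(np.mean((w - quantize_fixed(w, bits)) ** 2))
    print(f"{bits}-bit weights, quantization MSE: {err:.5f}")
```

Trained bit-width quantization, by contrast, treats the bit-width (or equivalently the quantization grid) as a parameter learned during training rather than fixed up front, which is how the paper trades a small accuracy drop for a much smaller memory footprint.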


Related research

Resource-efficient DNNs for Keyword Spotting using Neural Architecture Search and Quantization (12/18/2020)
This paper introduces neural architecture search (NAS) for the automatic...

DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs (06/27/2023)
Neural architecture search (NAS) proves to be among the effective approa...

Channel-wise Mixed-precision Assignment for DNN Inference on Constrained Edge Nodes (06/17/2022)
Quantization is widely employed in both cloud and edge systems to reduce...

Improving the Energy Efficiency and Robustness of tinyML Computer Vision using Log-Gradient Input Images (03/04/2022)
This paper studies the merits of applying log-gradient input images to c...

Deep Model Compression via Filter Auto-sampling (07/12/2019)
The recent WSNet [1] is a new model compression method through sampling ...

MultiQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization (05/14/2023)
Arbitrary bit-width network quantization has received significant attent...

ShiftNAS: Towards Automatic Generation of Advanced Multiplication-Less Neural Networks (04/07/2022)
Multiplication-less neural networks significantly reduce the time and en...
