SqueezeNext: Hardware-Aware Neural Network Design

03/23/2018
by Amir Gholami, et al.

One of the main barriers to deploying neural networks on embedded systems has been the large memory footprint and power consumption of existing networks. In this work, we introduce SqueezeNext, a new family of neural network architectures whose design was guided both by previous architectures such as SqueezeNet and by simulation results on a neural network accelerator. This new network matches AlexNet's accuracy on the ImageNet benchmark with 112× fewer parameters, and one of its deeper variants achieves VGG-19 accuracy with only 4.4 million parameters (31× smaller than VGG-19). SqueezeNext also achieves better top-5 classification accuracy with 1.3× fewer parameters than MobileNet, while avoiding the depthwise-separable convolutions that are inefficient on some mobile processor platforms. This wide range of accuracy lets users make speed-accuracy tradeoffs depending on the resources available on the target hardware. Hardware simulation of power and inference speed on an embedded system guided us to design variations of the baseline model that are 2.59×/8.26× faster and 2.25×/7.5× more energy efficient than SqueezeNet/AlexNet, without any accuracy degradation.
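To make the design direction concrete, the sketch below shows a SqueezeNext-style building block in PyTorch: 1×1 bottleneck convolutions followed by a 3×3 convolution factorized into 3×1 and 1×3 convolutions, rather than a depthwise-separable convolution. This is only a minimal illustration; the class name, channel ratios, and layer ordering are assumptions for exposition, not the paper's exact specification.

```python
import torch
import torch.nn as nn


class SqueezeNextStyleBlock(nn.Module):
    """Illustrative block: 1x1 bottlenecks plus a 3x3 convolution factorized
    into 3x1 and 1x3 convolutions, avoiding depthwise-separable convolutions.
    Channel ratios here are assumptions, not the paper's exact values."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        mid = in_channels // 2  # two-stage squeeze (assumed ratio)
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid // 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid // 2), nn.ReLU(inplace=True),
            # 3x3 convolution factorized into 3x1 and 1x3 (low-rank) convolutions
            nn.Conv2d(mid // 2, mid // 2, kernel_size=(3, 1), padding=(1, 0), bias=False),
            nn.BatchNorm2d(mid // 2), nn.ReLU(inplace=True),
            nn.Conv2d(mid // 2, mid // 2, kernel_size=(1, 3), padding=(0, 1), bias=False),
            nn.BatchNorm2d(mid // 2), nn.ReLU(inplace=True),
            nn.Conv2d(mid // 2, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # Skip connection; projects with a 1x1 convolution if channel counts differ.
        self.skip = (nn.Identity() if in_channels == out_channels
                     else nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False))

    def forward(self, x):
        return torch.relu(self.block(x) + self.skip(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    y = SqueezeNextStyleBlock(64, 64)(x)
    print(y.shape)  # torch.Size([1, 64, 56, 56])
```

Factorizing the 3×3 kernel into 3×1 and 1×3 kernels cuts the parameter count of that layer from 9·C·C to roughly 6·C·C while keeping standard dense convolutions, which map more efficiently onto the mobile processors discussed in the abstract than depthwise-separable convolutions.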

Related research

09/08/2019 · TMA: Tera-MACs/W Neural Hardware Inference Accelerator with a Multiplier-less Massive Parallel Processor
10/13/2022 · A Near-Sensor Processing Accelerator for Approximate Local Binary Pattern Networks
11/21/2018 · Synetgy: Algorithm-hardware Co-design for ConvNet Accelerators on Embedded FPGAs
02/27/2022 · Arrhythmia Classifier Using Convolutional Neural Network with Adaptive Loss-aware Multi-bit Networks Quantization
04/02/2021 · LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
09/09/2020 · Hardware Aware Training for Efficient Keyword Spotting on General Purpose and Specialized Hardware
01/29/2020 · Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks
