Incomplete Dot Products for Dynamic Computation Scaling in Neural Network Inference

10/21/2017
by Bradley McDanel, et al.

We propose the use of incomplete dot products (IDP) to dynamically adjust the number of input channels used in each layer of a convolutional neural network during feedforward inference. IDP adds monotonically non-increasing coefficients, referred to as a "profile", to the channels during training. The profile orders the contribution of each channel in non-increasing order. At inference time, the number of channels used can be dynamically adjusted to trade off accuracy for lowered power consumption and reduced latency by selecting only a beginning subset of channels. This approach allows a single network to dynamically scale over a computation range, as opposed to training and deploying multiple networks to support different levels of computation scaling. Additionally, we extend the notion to multiple profiles, each optimized for a specific range of computation scaling. We present experiments on the computation and accuracy trade-offs of IDP for popular image classification models and datasets. We demonstrate that, for MNIST and CIFAR-10, IDP reduces computation significantly, e.g., by 75%, without significantly compromising accuracy. We argue that IDP provides a convenient and effective means for devices to lower computation costs dynamically to reflect the current computation budget of the system. For example, VGG-16 trained with IDP and run with 50% of its channels retains substantially higher accuracy on the CIFAR-10 dataset than the standard network, which achieves only 35% accuracy when using the reduced channel set.
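As a rough sketch of the idea described above, the snippet below applies a hand-chosen linear profile of monotonically non-increasing coefficients to the input channels of a single fully connected layer and then truncates the dot product to the first fraction of channels at inference time. The function names (`linear_profile`, `idp_dense`), the layer shape, and the specific profile shape are illustrative assumptions; the paper's training procedure, convolutional formulation, and multiple-profile scheme are not reproduced here.

```python
# Minimal NumPy sketch of an incomplete dot product (IDP) for one dense layer.
# All names and shapes are illustrative assumptions, not the authors' code.
import numpy as np

def linear_profile(num_channels):
    """Monotonically non-increasing coefficients, one per input channel."""
    # 1.0, (n-1)/n, ..., 1/n -- later channels contribute less.
    return np.linspace(1.0, 1.0 / num_channels, num_channels)

def idp_dense(x, W, profile, fraction=1.0):
    """Dot product using only the first `fraction` of input channels.

    x:       (num_channels,) input activations
    W:       (num_channels, num_outputs) weights
    profile: (num_channels,) non-increasing channel coefficients
    """
    k = max(1, int(round(fraction * len(profile))))   # channels kept
    # Scale each retained channel by its profile coefficient, drop the rest.
    return (x[:k] * profile[:k]) @ W[:k, :]

# Example: full computation vs. 50% IDP on random data.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
W = rng.standard_normal((64, 10))
p = linear_profile(64)
full = idp_dense(x, W, p, fraction=1.0)
half = idp_dense(x, W, p, fraction=0.5)   # roughly half the multiply-accumulates
```

Because the profile pushes most of the useful signal into the earliest channels during training, truncating to the first k channels at inference degrades the output gracefully rather than arbitrarily, which is what lets one trained network serve multiple computation budgets.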

