LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference

03/15/2022
by   Zhongzhi Yu, et al.

Low-precision deep neural network (DNN) training is one of the most effective techniques for boosting DNNs' training efficiency, as it reduces training cost at the finest granularity, the bit level. While most existing works fix the model precision for the whole training process, a few pioneering works have shown that dynamic precision schedules help DNNs converge to higher accuracy at lower training cost than their static-precision counterparts. However, existing dynamic low-precision training methods rely on manually designed precision schedules to achieve an advantageous efficiency-accuracy trade-off, limiting both their practical applicability and their achievable performance. To this end, we propose LDP, a Learnable Dynamic Precision DNN training framework that automatically learns a temporally and spatially dynamic precision schedule during training toward an optimal accuracy-efficiency trade-off. Notably, LDP-trained DNNs are by nature efficient during inference as well. Furthermore, we visualize the temporal and spatial precision schedules and distributions that LDP learns on different tasks, to better understand the corresponding DNNs' characteristics at different training stages and layers both during and after training, drawing insights for further innovation. Extensive experiments and ablation studies (seven networks, five datasets, and three tasks) show that the proposed LDP consistently outperforms state-of-the-art (SOTA) low-precision DNN training techniques in terms of the achieved training efficiency and accuracy trade-off. For example, in addition to being automated, our LDP achieves a 0.31% higher accuracy with a 39.1% lower computational cost than the best SOTA method when training ResNet-20 on CIFAR-10.
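To make the core idea concrete, below is a minimal sketch in PyTorch of one way a per-layer precision can be made learnable: a fractional bit-width parameter is realized as an interpolation between the two neighboring integer bit-widths, so the task loss can back-propagate into the bit-width itself. The class `LearnableBitQuantizer` and its parameterization are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch (assumed, not LDP's exact method) of a per-layer
# learnable-precision quantizer in PyTorch.
import torch
import torch.nn as nn

class LearnableBitQuantizer(nn.Module):
    """Quantizes a tensor to a learnable, fractional bit-width.

    A fractional bit-width b is realized as a convex combination of the
    two neighboring integer bit-widths, so gradients can flow into b.
    """
    def __init__(self, init_bits=8.0, min_bits=2.0, max_bits=8.0):
        super().__init__()
        # Continuous, trainable bit-width (hypothetical parameterization).
        self.bits = nn.Parameter(torch.tensor(float(init_bits)))
        self.min_bits = min_bits
        self.max_bits = max_bits

    @staticmethod
    def _quantize(x, n_levels):
        # Uniform quantization with a straight-through estimator:
        # forward uses round(), backward passes gradients unchanged.
        scale = x.detach().abs().max().clamp(min=1e-8) / (n_levels - 1)
        q = torch.round(x / scale) * scale
        return x + (q - x).detach()

    def forward(self, x):
        b = self.bits.clamp(self.min_bits, self.max_bits)
        lo = torch.floor(b)
        frac = b - lo  # distance to the next integer bit-width
        x_lo = self._quantize(x, 2 ** int(lo.item()))
        x_hi = self._quantize(x, 2 ** (int(lo.item()) + 1))
        # Interpolating the two quantized tensors makes the output, and
        # hence the task loss, differentiable w.r.t. the bit-width.
        return (1 - frac) * x_lo + frac * x_hi
```

In a full framework, each layer would own such quantizers for its weights and activations, and a hardware-cost term (e.g., a penalty proportional to the learned bit-widths) would be added to the task loss so that precision is traded off against accuracy automatically; the learned bit-widths can then vary over training epochs (temporally) and across layers (spatially), as the abstract describes.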


