Training Quantized Deep Neural Networks via Cooperative Coevolution

12/23/2021
by Fu Peng et al.

This work considers a challenging Deep Neural Network (DNN) quantization task: training quantized DNNs without any full-precision operations. Most previous quantization approaches do not apply to this task because they rely on full-precision gradients to update network weights. To fill this gap, we advocate using Evolutionary Algorithms (EAs) to search for the optimal low-bit weights of DNNs. To solve the induced large-scale discrete optimization problem efficiently, we propose a novel EA based on cooperative coevolution that repeatedly groups the network weights (variables) by the confidence in their current values and concentrates the search on the weights with the least confidence. To the best of our knowledge, this is the first work to apply EAs to train quantized DNNs. Experiments show that our approach surpasses previous quantization approaches and trains a 4-bit ResNet-20 on the CIFAR-10 dataset to the same test accuracy as its full-precision counterpart.

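For intuition, below is a minimal, hedged sketch of the kind of gradient-free loop the abstract describes: weights live in a discrete low-bit code space, a "confidence" score tracks how long each weight has survived unchanged, and each step mutates only the least-confident group. The confidence heuristic, the mutation scheme, and every name in the sketch are illustrative assumptions, not the authors' implementation.

```python
# A minimal, illustrative sketch of the idea above (NOT the paper's actual
# algorithm): search directly in the discrete low-bit weight space, and keep
# refocusing mutation on the weight group we are least confident about.
import numpy as np

rng = np.random.default_rng(0)

BITS = 4
LEVELS = 2 ** BITS  # 4-bit weights -> 16 discrete codes per weight


def evolve(weights, loss_fn, steps=1000, group_frac=0.1):
    """Gradient-free, (1+1)-style evolutionary loop over discrete weight codes.

    `weights` is a flat integer array of codes in [0, LEVELS); `loss_fn`
    evaluates a candidate (e.g. the training loss of the decoded network).
    """
    best = weights.copy()
    best_loss = loss_fn(best)
    # "Confidence" heuristic (assumed): steps a weight has survived unchanged.
    confidence = np.zeros(best.size)
    k = max(1, int(group_frac * best.size))
    for _ in range(steps):
        # Group the variables by confidence; optimize the least-confident ones.
        group = np.argsort(confidence)[:k]
        child = best.copy()
        # Mutate only that group: random +/-1 steps in code space, clipped.
        child[group] = np.clip(child[group] + rng.integers(-1, 2, size=k),
                               0, LEVELS - 1)
        child_loss = loss_fn(child)
        if child_loss < best_loss:        # greedy acceptance, no gradients
            best, best_loss = child, child_loss
            confidence[group] = 0         # changed weights: confidence resets
        else:
            confidence[group] += 1        # surviving values gain confidence
    return best


# Toy usage: recover a hidden 4-bit code vector from a quadratic "loss".
target = rng.integers(0, LEVELS, size=256)
loss = lambda w: float(np.sum((w - target) ** 2))
w = evolve(rng.integers(0, LEVELS, size=256), loss, steps=5000)
print("final loss:", loss(w))
```

In an actual DNN setting, `loss_fn` would decode the integer codes into quantized layer weights and run a forward pass on a training batch; the point of the sketch is only that selection pressure plus confidence-guided grouping can drive the discrete search without any full-precision gradient.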
