BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights

01/19/2018
by Penghang Yin et al.

We propose BinaryRelax, a simple two-phase algorithm for training deep neural networks with quantized weights. The set constraint that characterizes the quantization of weights is not imposed until the late stage of training, and a sequence of pseudo-quantized weights is maintained. Specifically, we relax the hard constraint into a continuous regularizer via the Moreau envelope, which turns out to be the squared Euclidean distance to the set of quantized weights. The pseudo-quantized weights are obtained by linearly interpolating between the float weights and their quantizations. A continuation strategy is adopted to push the weights towards the quantized state by gradually increasing the regularization parameter. In the second phase, an exact quantization scheme with a small learning rate is invoked to guarantee fully quantized weights. We test BinaryRelax on the benchmark CIFAR-10 and CIFAR-100 color image datasets to demonstrate the superiority of the relaxed quantization approach and the improved accuracy over state-of-the-art training methods. Finally, we prove the convergence of BinaryRelax under an approximate orthogonality condition.
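
To make the two-phase update concrete, here is a minimal NumPy sketch, not the authors' released implementation: it assumes binary quantization with a single layerwise scale, and the names (`binary_project`, `binaryrelax_step`) and hyperparameters are illustrative.

```python
import numpy as np

def binary_project(w):
    # Euclidean projection of w onto {a * b : a >= 0, b in {-1, +1}^n};
    # the optimal layerwise scale a is the mean absolute value of w.
    return np.mean(np.abs(w)) * np.sign(w)

def binaryrelax_step(w, grad, lam, lr, phase2=False):
    """One BinaryRelax update.

    w      -- auxiliary float weights
    grad   -- loss gradient evaluated at the pseudo-quantized weights u
    lam    -- regularization parameter, grown during phase 1
    lr     -- learning rate, taken small in phase 2
    phase2 -- if True, impose the quantization constraint exactly
    """
    w = w - lr * grad                        # SGD step on the float weights
    if phase2:
        u = binary_project(w)                # exact quantization in phase 2
    else:
        # Proximal map of the relaxed regularizer (lam/2) * dist(w, Q)^2:
        # a linear interpolation between the float weights and their
        # quantization.
        u = (w + lam * binary_project(w)) / (1.0 + lam)
    return w, u
```

During phase 1, the continuation strategy grows lam geometrically between epochs (e.g. lam *= rho for some rho slightly above 1, an illustrative choice), so the pseudo-quantized weights u drift toward the quantized set; switching to phase2=True with a reduced learning rate then guarantees that u is fully quantized.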

Related research

12/10/2020 · Recurrence of Optimum for Training Weight and Activation Quantized Networks
Deep neural networks (DNNs) are quantized for efficient inference on res...

06/07/2016 · Deep neural networks are robust to weight binarization and other non-linear distortions
Recent results show that deep neural networks achieve excellent performa...

02/29/2020 · Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Adaptive Regularization
As deep neural networks make their ways into different domains, their co...

11/24/2018 · On Periodic Functions as Regularizers for Quantization of Neural Networks
Deep learning models have been successfully used in computer vision and ...

11/07/2022 · AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
In this paper, we develop a new algorithm, Annealed Skewed SGD - AskewSG...

05/04/2019 · SinReQ: Generalized Sinusoidal Regularization for Automatic Low-Bitwidth Deep Quantized Training
Quantization of neural networks offers significant promise in reducing t...

08/10/2016 · Approximate search with quantized sparse representations
This paper tackles the task of storing a large collection of vectors, su...
