Training Quantised Neural Networks with STE Variants: the Additive Noise Annealing Algorithm

by Matteo Spallanzani et al.

Training quantised neural networks (QNNs) is a non-differentiable optimisation problem since weights and features are output by piecewise constant functions. The standard solution is to apply the straight-through estimator (STE), using different functions during the inference and gradient computation steps. Several STE variants have been proposed in the literature aiming to maximise the task accuracy of the trained network. In this paper, we analyse STE variants and study their impact on QNN training. We first observe that most such variants can be modelled as stochastic regularisations of stair functions; although this intuitive interpretation is not new, our rigorous discussion generalises to further variants. Then, we analyse QNNs mixing different regularisations, finding that some suitably synchronised smoothing of each layer map is required to guarantee pointwise compositional convergence to the target discontinuous function. Based on these theoretical insights, we propose additive noise annealing (ANA), a new algorithm to train QNNs encompassing standard STE and its variants as special cases. When testing ANA on the CIFAR-10 image classification benchmark, we find that the major impact on task accuracy is not due to the qualitative shape of the regularisations but to the proper synchronisation of the different STE variants used in a network, in accordance with the theoretical results.
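The "stochastic regularisation" view mentioned above can be made concrete with a small numerical check. The sketch below (a minimal illustration, not the paper's implementation; all function names are our own) estimates the expectation of the Heaviside step under additive uniform noise by Monte Carlo and compares it with the closed-form smoothed surrogate, a clipped linear ramp. The derivative of that ramp is the rectangular window commonly used as the straight-through gradient.

```python
import numpy as np

def step(x):
    # Hard stair function: 0 for x < 0, 1 for x >= 0
    return (x >= 0).astype(float)

def smoothed_step_mc(x, half_width=0.5, n=200_000, seed=0):
    # Monte Carlo estimate of E_u[step(x + u)], u ~ Uniform(-half_width, half_width):
    # adding noise before the step and averaging regularises the discontinuity
    rng = np.random.default_rng(seed)
    u = rng.uniform(-half_width, half_width, size=(n, 1))
    return step(x[None, :] + u).mean(axis=0)

def smoothed_step_exact(x, half_width=0.5):
    # Closed form of the same expectation: a clipped linear ramp
    # ("hard sigmoid"); its derivative is the boxcar 1/(2*half_width)
    # on [-half_width, half_width], i.e. the usual STE rectangle
    return np.clip(x / (2 * half_width) + 0.5, 0.0, 1.0)

x = np.linspace(-1.0, 1.0, 9)
assert np.allclose(smoothed_step_mc(x), smoothed_step_exact(x), atol=1e-2)
```

Shrinking `half_width` towards zero anneals the surrogate back to the discontinuous step, which is the intuition behind annealing the noise during training.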




