Learning Multimodal Fixed-Point Weights using Gradient Descent

07/16/2019
by Lukas Enderich, et al.

Due to their high computational complexity, deep neural networks remain limited to powerful processing units. To reduce model complexity through low-bit fixed-point quantization, we propose a gradient-based optimization strategy that generates a symmetric mixture of Gaussian modes (SGM), where each mode corresponds to a particular quantization stage. We achieve state-of-the-art performance at 2 bits and illustrate the model's ability for self-dependent weight adaptation during training.
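
To make the idea concrete, below is a minimal PyTorch sketch of one way such a regularizer could look: the negative log-likelihood of the weights under an equally weighted, symmetric Gaussian mixture whose means sit on the fixed-point quantization levels, so that gradient descent pulls each weight toward its nearest stage. The level spacing `step`, the mixture width `sigma`, the equal mixture weights, and the 2^b - 1 symmetric level count are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def sgm_regularizer(weights: torch.Tensor,
                    bits: int = 2,
                    step: float = 0.5,
                    sigma: float = 0.1) -> torch.Tensor:
    """Negative log-likelihood of `weights` under a symmetric mixture of
    Gaussians whose means are the fixed-point quantization levels.

    Sketch only: `step`, `sigma`, and the equal mixture weights are
    hypothetical hyperparameters, not values from the paper.
    """
    # Symmetric fixed-point levels around zero, e.g. bits=2 -> {-1, 0, +1} * step.
    n_levels = 2 ** bits - 1
    k = torch.arange(-(n_levels // 2), n_levels // 2 + 1,
                     device=weights.device, dtype=weights.dtype)
    means = k * step                               # quantization stages (K,)

    w = weights.reshape(-1, 1)                     # flatten to (N, 1)
    # Unnormalized per-component Gaussian log-densities, shape (N, K).
    log_comp = -0.5 * ((w - means) / sigma) ** 2
    # Log-density of the equally weighted mixture for each weight.
    log_mix = torch.logsumexp(log_comp, dim=1)
    return -log_mix.mean()


# Usage sketch: add the SGM term to the task loss with a trade-off factor.
model = torch.nn.Linear(10, 2)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
task_loss = torch.nn.functional.cross_entropy(model(x), y)
lam = 1e-3  # illustrative regularization strength
loss = task_loss + lam * sum(sgm_regularizer(p) for p in model.parameters())
loss.backward()
```

Minimizing this term alongside the task loss drives the weight distribution toward the desired multimodal shape, after which rounding to the nearest level incurs little accuracy loss.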
