Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks

01/16/2020
by Léopold Cambier et al.

Training with a larger number of parameters while keeping iterations fast is an increasingly adopted strategy for developing better-performing Deep Neural Network (DNN) models. This, however, increases the memory footprint and computational requirements of training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models: ResNet-50, Transformer and NCF. The method maintains model accuracy without requiring fine-tuning of loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors, the shifted and squeezed factors, which are used to optimally adjust the range of the tensors within 8 bits, thus minimizing the loss of information due to quantization.
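
To make the idea concrete, below is a minimal NumPy sketch (not the authors' code) of shifting and squeezing a tensor's values on a log2 scale so that they fit the narrow dynamic range of an 8-bit float before quantization, then undoing the transform. The FP8 layout (1 sign, 5 exponent, 2 mantissa bits), the range constants, the per-tensor statistics used to derive the factors, and the helper names quantize_fp8 and s2fp8_roundtrip are illustrative assumptions, not the paper's exact formulation (in S2FP8 the factors are learned/tracked during training).

import numpy as np

# Assumed FP8 layout and range constants (illustrative).
FP8_MAX_EXP = 15.0     # ceiling of representable log2-magnitudes
FP8_MIN_EXP = -16.0    # floor of representable log2-magnitudes
MANTISSA_BITS = 2      # mantissa width of the emulated 8-bit float

def quantize_fp8(x):
    """Emulate FP8 rounding: clamp the exponent range, keep MANTISSA_BITS of mantissa."""
    sign = np.sign(x)
    mag = np.clip(np.abs(x), 2.0 ** FP8_MIN_EXP, 2.0 ** FP8_MAX_EXP)
    exp = np.floor(np.log2(mag))
    step = 2.0 ** (exp - MANTISSA_BITS)          # spacing between representable values
    return sign * np.round(mag / step) * step

def s2fp8_roundtrip(x, eps=1e-30):
    """Shift and squeeze the log2-magnitudes of x into the FP8 range, quantize, invert."""
    out = np.zeros_like(x)
    nz = np.abs(x) > eps
    if not np.any(nz):
        return out
    logs = np.log2(np.abs(x[nz]))
    mu, m = logs.mean(), logs.max()
    # Per-tensor scalars: the squeeze maps the largest log-magnitude onto the FP8
    # ceiling, the shift centres the distribution of log-magnitudes around zero.
    alpha = FP8_MAX_EXP / max(m - mu, 1e-6)      # squeeze factor
    beta = -alpha * mu                           # shift factor
    y = np.sign(x[nz]) * 2.0 ** (alpha * np.log2(np.abs(x[nz])) + beta)
    y = quantize_fp8(y)
    # Undo the transform to recover values on the original scale.
    out[nz] = np.sign(y) * 2.0 ** ((np.log2(np.abs(y)) - beta) / alpha)
    return out

if __name__ == "__main__":
    x = 1e-6 * np.random.randn(1024)             # small-magnitude tensor (e.g. gradients)
    print("naive FP8 error:", np.linalg.norm(x - quantize_fp8(x)) / np.linalg.norm(x))
    print("S2FP8 error:    ", np.linalg.norm(x - s2fp8_roundtrip(x)) / np.linalg.norm(x))

In this toy setup, a tensor whose magnitudes fall below the assumed FP8 floor is clamped (and badly distorted) by direct quantization, whereas the shifted-and-squeezed roundtrip places the same values inside the representable range and recovers them with a small relative error, which is the effect the abstract attributes to the two learnable factors.
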


Related research

09/06/2019 - Training Deep Neural Networks Using Posit Number System
With the increasing size of Deep Neural Network (DNN) models, the high m...

01/31/2023 - Training with Mixed-Precision Floating-Point Assignments
When training deep neural networks, keeping all tensors in high precisio...

10/13/2017 - TensorQuant - A Simulation Toolbox for Deep Neural Network Quantization
Recent research implies that training and inference of deep neural netwo...

04/19/2018 - Minimizing Area and Energy of Deep Learning Hardware Design Using Collective Low Precision and Structured Compression
Deep learning algorithms have shown tremendous success in many recogniti...

10/10/2017 - Mixed Precision Training
Deep neural networks have enabled progress in a wide variety of applicat...

10/28/2019 - Adaptive Loss Scaling for Mixed Precision Training
Mixed precision training (MPT) is becoming a practical technique to impr...

01/03/2020 - Learning Accurate Integer Transformer Machine-Translation Models
We describe a method for training accurate Transformer machine-translati...
