Customizing Number Representation and Precision

12/08/2022
by Olivier Sentieys, et al.

There is growing interest in the use of reduced-precision arithmetic, amplified by the recent surge of artificial intelligence and, in particular, deep learning. Most architectures already provide reduced-precision capabilities (e.g., 8-bit integer, 16-bit floating point), and on FPGAs any number format and bit-width can be considered.

In computer arithmetic, the representation of real numbers is a major issue. Fixed-point (FxP) and floating-point (FlP) are the main options for representing reals, each with its own advantages and drawbacks. This chapter presents both the FxP and FlP number representations and draws a fair comparison of their cost, performance, and energy, as well as their impact on accuracy during computations. It shows that the choice between FxP and FlP is not obvious and strongly depends on the application considered. In some cases, low-precision floating-point arithmetic can be the most effective and offers benefits over the classical fixed-point choice for energy-constrained applications.
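As a rough, standalone illustration of the FxP/FlP trade-off described above (not taken from the chapter), the Python sketch below rounds the same values to a hypothetical 16-bit fixed-point format (assumed Q4.11 split: 1 sign, 4 integer, 11 fractional bits) and to IEEE 16-bit floating point, then compares worst-case absolute error. The format split and the value range are assumptions chosen only for illustration.

```python
import numpy as np

def to_fixed_point(x, int_bits=4, frac_bits=11):
    """Round x to a signed fixed-point grid (hypothetical Q4.11 layout:
    1 sign + 4 integer + 11 fractional bits = 16 bits total)."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** int_bits)               # most negative representable value
    hi = (2.0 ** int_bits) - 1.0 / scale  # most positive representable value
    return np.clip(np.round(x * scale) / scale, lo, hi)

# Sample values well inside the assumed Q4.11 range, so no saturation occurs.
x = np.linspace(-3.0, 3.0, 10001)

fxp = to_fixed_point(x)                        # 16-bit fixed point
flp = x.astype(np.float16).astype(np.float64)  # 16-bit floating point

# FxP error is uniform (half an LSB); FlP error scales with magnitude.
print("max abs error, 16-bit fixed point:", np.max(np.abs(x - fxp)))
print("max abs error, float16           :", np.max(np.abs(x - flp)))
```

On this range the fixed-point error stays bounded by half an LSB (about 2.4e-4), while the float16 error grows with the magnitude of the value; near zero the situation reverses. This is one facet of the accuracy trade-off between the two representations that the chapter examines.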

