Big-PERCIVAL: Exploring the Native Use of 64-Bit Posit Arithmetic in Scientific Computing

05/11/2023
by David Mallasén, et al.

The accuracy requirements of many scientific computing workloads lead to the use of double-precision floating-point arithmetic in their execution kernels. Nevertheless, emerging real-number representations, such as posit arithmetic, show promise in delivering even higher accuracy in such computations. In this work, we explore the native use of 64-bit posits in a series of numerical benchmarks extracted from the PolyBench collection and compare their timing performance, accuracy, and hardware cost against IEEE 754 doubles. To this end, we extend the PERCIVAL RISC-V core and the Xposit custom RISC-V extension with posit64 and quire operations. Results show that posit64 can execute as fast as doubles while obtaining up to 4 orders of magnitude lower mean square error and up to 3 orders of magnitude lower maximum absolute error. However, leveraging the quire accumulator register can constrain the order in which operations such as matrix multiplications are evaluated. Furthermore, detailed FPGA synthesis results highlight the significant hardware cost of 64-bit posit arithmetic and the quire. Despite this, the large accuracy improvements achieved with the same memory bandwidth suggest that posit arithmetic is a promising alternative representation for scientific computing.
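
To illustrate the error metrics reported above, the sketch below compares a plain double-precision GEMM kernel (in the style of PolyBench) against a version that accumulates products in a wider type, loosely mimicking the quire's exact accumulation with a single rounding at the end. This is a hypothetical software analogy only, not the paper's evaluation, which runs posit64 and quire instructions natively on the extended PERCIVAL core; the matrix size, random data, and the use of long double as a stand-in accumulator are assumptions for illustration.

```c
/* Sketch: double GEMM vs. a wider "quire-like" accumulator, reporting the
 * mean square error (MSE) and maximum absolute error used in the paper.
 * Software analogy only; actual posit64/quire operations run in hardware. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N 64

/* GEMM with plain double accumulation: C = A * B */
static void gemm_double(const double A[N][N], const double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double acc = 0.0;
            for (int k = 0; k < N; k++)
                acc += A[i][k] * B[k][j];
            C[i][j] = acc;
        }
}

/* GEMM with a wider accumulator, mimicking the quire's accumulation of
 * products followed by a single rounding step. */
static void gemm_wide_acc(const double A[N][N], const double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            long double acc = 0.0L;          /* wider accumulator */
            for (int k = 0; k < N; k++)
                acc += (long double)A[i][k] * (long double)B[k][j];
            C[i][j] = (double)acc;           /* round once at the end */
        }
}

int main(void) {
    static double A[N][N], B[N][N], Cd[N][N], Cq[N][N];
    srand(42);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = (double)rand() / RAND_MAX - 0.5;
            B[i][j] = (double)rand() / RAND_MAX - 0.5;
        }

    gemm_double(A, B, Cd);
    gemm_wide_acc(A, B, Cq);

    /* Error metrics, taking the wide-accumulator result as the reference. */
    double mse = 0.0, max_abs = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double e = Cd[i][j] - Cq[i][j];
            mse += e * e;
            if (fabs(e) > max_abs) max_abs = fabs(e);
        }
    mse /= (double)(N * N);

    printf("MSE = %.3e, max abs error = %.3e\n", mse, max_abs);
    return 0;
}
```

In a native posit build, the inner accumulation loop would presumably map to quire fused multiply-accumulate instructions exposed through the Xposit extension, which is the source of the loop-ordering constraint mentioned in the abstract.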

Related research

- PERCIVAL: Open-Source Posit RISC-V Core with Quire Capability (11/30/2021)
  The posit representation for real numbers is an alternative to the ubiqu...
- Open-Source GEMM Hardware Kernels Generator: Toward Numerically-Tailored Computations (05/23/2023)
  Many scientific computing problems can be reduced to Matrix-Matrix Multi...
- End-to-End DNN Training with Block Floating Point Arithmetic (04/04/2018)
  DNNs are ubiquitous datacenter workloads, requiring orders of magnitude ...
- The Accuracy and Efficiency of Posit Arithmetic (09/16/2021)
  Motivated by the increasing interest in the posit numeric format, in thi...
- Leveraging the bfloat16 Artificial Intelligence Datatype For Higher-Precision Computations (04/12/2019)
  In recent years fused-multiply-add (FMA) units with lower-precision mult...
- On the accuracy and performance of the lattice Boltzmann method with 64-bit, 32-bit and novel 16-bit number formats (12/16/2021)
  Fluid dynamics simulations with the lattice Boltzmann method (LBM) are v...
- Memristive Stochastic Computing for Deep Learning Parameter Optimization (03/11/2021)
  Stochastic Computing (SC) is a computing paradigm that allows for the lo...
