Precision-aware Deterministic and Probabilistic Error Bounds for Floating Point Summation

03/29/2022
by Eric Hallman, et al.

We analyze the forward error in the floating point summation of real numbers, for computations in low precision or at extreme-scale problem dimensions that push the limits of the precision. We present a systematic recurrence for a martingale on a computational tree, which leads to explicit and interpretable bounds without asymptotic big-O terms. Two probability parameters strengthen the precision-awareness of our bounds: one controls the first-order terms in the summation error, while the second is designed to control the higher-order terms that arise in low precision or at extreme-scale problem dimensions. Our systematic approach yields new deterministic and probabilistic error bounds for three classes of mono-precision algorithms: general summation, shifted general summation, and compensated (sequential) summation. Extending our systematic error analysis to mixed-precision summation algorithms that allow any number of precisions yields the first probabilistic bounds for the mixed-precision FABsum algorithm. Numerical experiments illustrate that the probabilistic bounds are accurate, and that among the three classes of mono-precision algorithms, compensated summation is generally the most accurate. For mixed-precision algorithms, our recommendation is to minimize the magnitude of intermediate partial sums relative to the precision in which they are computed.
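For readers unfamiliar with the algorithm classes named above, the sketch below illustrates two of them: compensated (Kahan) sequential summation, and a FABsum-style blocked scheme in which block sums are computed in a low precision and accumulated in a higher one. This is a minimal illustration, not the paper's exact algorithms or analysis; the block size and the float32/float64 precision pairing are assumptions chosen for demonstration only.

```python
import numpy as np

def compensated_sum(x):
    """Kahan compensated summation carried out in float32.

    A running correction term captures the rounding error of each
    addition and feeds it back into the next one."""
    s = np.float32(0.0)
    c = np.float32(0.0)  # compensation for lost low-order bits
    for xi in x:
        y = np.float32(xi) - c
        t = s + y
        c = (t - s) - y   # rounding error of the addition s + y
        s = t
    return s

def fabsum_style(x, block_size=128):
    """FABsum-style blocked summation (illustrative sketch):
    sum each block with a plain low-precision loop, then accumulate
    the block sums in a higher precision."""
    total = np.float64(0.0)
    for start in range(0, len(x), block_size):
        block = np.float32(0.0)
        for xi in x[start:start + block_size]:
            block += np.float32(xi)   # fast sum in low precision
        total += np.float64(block)    # accumulate in high precision
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal(10**5).astype(np.float32)
    print("naive float32 :", data.sum(dtype=np.float32))
    print("compensated   :", compensated_sum(data))
    print("FABsum-style  :", fabsum_style(data))
    print("reference f64 :", data.astype(np.float64).sum())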

