# §3.1(i) Floating-Point Arithmetic

Computer arithmetic is described for the binary system, with base 2; another frequently used system is the hexadecimal system, with base 16.

A nonzero normalized binary floating-point machine number $x$ is represented as

 3.1.1 $x=(-1)^{s}\cdot(b_{0}.b_{1}b_{2}\dots b_{p-1})\cdot 2^{E},$ $b_{0}=1$,

where $s$ is equal to $0$ or $1$, each $b_{j}$, $j\geq 1$, is either $0$ or $1$, $b_{1}$ is the most significant bit, $p$ ($\in\mathbb{N}$) is the number of significant bits $b_{j}$, $b_{p-1}$ is the least significant bit, $E$ is an integer called the exponent, $b_{0}.b_{1}b_{2}\dots b_{p-1}$ is the significand, and $f=.b_{1}b_{2}\dots b_{p-1}$ is the fractional part.

The set of machine numbers $\mathbb{R}_{{\rm fl}}$ is the union of $0$ and the set

 3.1.2 $(-1)^{s}2^{E}\sum_{j=0}^{p-1}b_{j}2^{-j},$

with $b_{0}=1$ and all allowable choices of $E$, $p$, $s$, and $b_{j}$.

Let $E_{{\rm min}}\leq E\leq E_{{\rm max}}$ with $E_{{\rm min}}<0$ and $E_{{\rm max}}>0$. For given values of $E_{{\rm min}}$, $E_{{\rm max}}$, and $p$, the format width in bits $N$ of a computer word is the total number of bits: the sign (one bit), the significant bits $b_{1},b_{2},\dots,b_{p-1}$ ($p-1$ bits), and the bits allocated to the exponent (the remaining $N-p$ bits). The integers $p$, $E_{{\rm min}}$, and $E_{{\rm max}}$ are characteristics of the machine. The machine epsilon $\epsilon_{M}$, that is, the distance between $1$ and the next larger machine number with $E=0$, is given by $\epsilon_{M}=2^{-p+1}$. The machine precision is $\frac{1}{2}\epsilon_{M}=2^{-p}$. The lower and upper bounds for the absolute values of the nonzero machine numbers are given by

 3.1.3 $N_{{\rm min}}\equiv 2^{E_{{\rm min}}}\leq|x|\leq 2^{E_{{\rm max}}+1}\left(1-2^{-p}\right)\equiv N_{{\rm max}}.$

Underflow (overflow) after computing $x\neq 0$ occurs when $|x|$ is smaller (larger) than $N_{{\rm min}}$ ($N_{{\rm max}}$).
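As a concrete check of these definitions, the following sketch computes $\epsilon_{M}$, $N_{\rm min}$, and $N_{\rm max}$ for IEEE double precision and compares them with the values Python reports for its own floats (the variable names are illustrative):

```python
import sys

# IEEE double-precision characteristics: p significant bits,
# exponent range [E_min, E_max].
p, E_min, E_max = 53, -1022, 1023

eps_M = 2.0 ** (-p + 1)    # machine epsilon: distance from 1 to the next float
N_min = 2.0 ** E_min       # smallest positive normalized machine number

# N_max = 2^(E_max+1) * (1 - 2^-p), rewritten as (2 - 2^(1-p)) * 2^E_max
# to avoid computing 2^1024, which would itself overflow.
N_max = (2.0 - 2.0 ** (1 - p)) * 2.0 ** E_max

assert eps_M == sys.float_info.epsilon
assert N_min == sys.float_info.min
assert N_max == sys.float_info.max
```

The rewriting of $N_{\rm max}$ illustrates the point of the section: the textbook formula, evaluated naively, overflows before the final multiplication.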

# IEEE Standard

The current standard is the ANSI/IEEE Standard 754; see IEEE (1985, §§1–4). In the case of normalized binary representation the memory positions for single precision ($N=32$, $p=24$, $E_{{\rm min}}=-126$, $E_{{\rm max}}=127$) and double precision ($N=64$, $p=53$, $E_{{\rm min}}=-1022$, $E_{{\rm max}}=1023$) are as in Figure 3.1.1. The respective machine precisions are $\frac{1}{2}\epsilon_{M}=0.596\times 10^{-7}$ and $\frac{1}{2}\epsilon_{M}=0.111\times 10^{-15}$.
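The two quoted machine precisions can be reproduced directly (a minimal sketch):

```python
# Machine precision (1/2)*eps_M = 2^{-p} for IEEE single (p = 24)
# and double (p = 53) precision.
half_eps_single = 2.0 ** -24
half_eps_double = 2.0 ** -53

print(f"{half_eps_single:.3e}")   # 5.960e-08, i.e. 0.596 x 10^-7
print(f"{half_eps_double:.3e}")   # 1.110e-16, i.e. 0.111 x 10^-15
```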

# Rounding

Let $x$ be any positive number with

 3.1.4 $x=(1.b_{1}b_{2}\dots b_{p-1}b_{p}b_{p+1}\dots)\cdot 2^{E},$

$N_{{\rm min}}\leq x\leq N_{{\rm max}}$, and

 3.1.5 $x_{-}=(1.b_{1}b_{2}\dots b_{p-1})\cdot 2^{E},\qquad x_{+}=\left((1.b_{1}b_{2}\dots b_{p-1})+\epsilon_{M}\right)\cdot 2^{E}.$

Then rounding by chopping or rounding down of $x$ gives $x_{-}$, with maximum relative error $\epsilon_{M}$. Symmetric rounding or rounding to nearest of $x$ gives $x_{-}$ or $x_{+}$, whichever is nearer to $x$, with maximum relative error equal to the machine precision $\frac{1}{2}\epsilon_{M}=2^{-p}$.

Negative numbers $x$ are rounded in the same way as $-x$.
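The neighbors $x_{-}$ and $x_{+}$ of (3.1.5) can be computed explicitly for a toy precision. The helper below is a hypothetical illustration, not part of the standard; it assumes a positive $x$ in the normal range:

```python
import math

def neighbors(x, p):
    """Hypothetical helper: the adjacent p-bit machine numbers
    x_- <= x < x_+ of (3.1.5). Assumes N_min <= x <= N_max, x > 0."""
    E = math.floor(math.log2(x))                   # exponent of x
    m = math.floor((x / 2.0 ** E) * 2 ** (p - 1))  # first p significand bits
    x_minus = m * 2.0 ** (E - p + 1)               # chopping (round down)
    x_plus = (m + 1) * 2.0 ** (E - p + 1)          # x_- + eps_M * 2^E
    return x_minus, x_plus

# With p = 4 bits, x = 0.1 lies between 0.09375 and 0.1015625;
# round-to-nearest picks x_+ here, chopping picks x_-.
x_minus, x_plus = neighbors(0.1, 4)
```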

For further information see Goldberg (1991) and Overton (2001).

# §3.1(ii) Interval Arithmetic

Interval arithmetic is intended for bounding the total effect of rounding errors in calculations with machine numbers. With this arithmetic the computed result can be proved to lie in a certain interval, which leads to validated computing with guaranteed and rigorous inclusion regions for the results.

Let $G$ be the set of closed intervals $\{[a,b]\}$. The elementary arithmetical operations on intervals are defined as follows:

 3.1.6 $I*J=\{x*y\,|\,x\in I,y\in J\},$ $I,J\in G$,

where $*\in\{+,-,\cdot,/\}$, with appropriate roundings of the end points of $I*J$ when machine numbers are being used. Division is possible only if the divisor interval does not contain zero.
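A bare-bones sketch of (3.1.6) on interval endpoints follows. It deliberately omits the outward rounding of endpoints that a rigorous machine implementation requires, so it illustrates the set operations only:

```python
def interval_add(I, J):
    (a, b), (c, d) = I, J
    return (a + c, b + d)

def interval_sub(I, J):
    (a, b), (c, d) = I, J
    return (a - d, b - c)          # smallest x - y uses the largest y

def interval_mul(I, J):
    (a, b), (c, d) = I, J
    products = (a * c, a * d, b * c, b * d)
    return (min(products), max(products))

def interval_div(I, J):
    c, d = J
    if c <= 0.0 <= d:              # division requires 0 not in J
        raise ZeroDivisionError("divisor interval contains zero")
    return interval_mul(I, (1.0 / d, 1.0 / c))
```

For example, `interval_mul((-1, 2), (3, 4))` yields `(-4, 8)`: every product $xy$ with $x\in[-1,2]$, $y\in[3,4]$ lies in $[-4,8]$.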

A basic text on interval arithmetic and analysis is Alefeld and Herzberger (1983), and for applications and further information see Moore (1979) and Petković and Petković (1998). The last reference includes analogs for arithmetic in the complex plane $\mathbb{C}$.

# §3.1(iii) Rational Arithmetics

Computer algebra systems use exact rational arithmetic with rational numbers $p/q$, where $p$ and $q$ are multi-length integers. During the calculations common divisors are removed from the rational numbers, and the final results can be converted to decimal representations of arbitrary length. For further information see Matula and Kornerup (1980).
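Python's standard library offers this style of arithmetic directly; a minimal sketch:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Exact rational arithmetic: Fraction stores p/q with arbitrary-precision
# integers and removes common divisors automatically.
x = Fraction(1, 3) + Fraction(1, 6)        # = 3/6, reduced to 1/2
assert (x.numerator, x.denominator) == (1, 2)

# The exact result can be converted to a decimal representation of
# (effectively) arbitrary length by raising the Decimal precision.
getcontext().prec = 50
approx = Decimal(x.numerator) / Decimal(x.denominator)
```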

# §3.1(iv) Level-Index Arithmetic

To eliminate overflow or underflow in finite-precision arithmetic, numbers are represented by using generalized logarithms $\ln_{\ell}(x)$ given by

 3.1.7 $\ln_{0}(x)=x,\qquad\ln_{\ell}(x)=\ln\left(\ln_{\ell-1}(x)\right),$ $\ell=1,2,\dots$,

with $x\geq 0$ and $\ell$ the unique nonnegative integer such that $a\equiv\ln_{\ell}(x)\in[0,1)$. In level-index arithmetic $x$ is represented by $\ell+a$ (or $-(\ell+a)$ for negative numbers). Also in this arithmetic generalized precision can be defined, which includes absolute error and relative precision (§3.1(v)) as special cases.
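The definition translates directly into code; the sketch below iterates (3.1.7) until the value drops into $[0,1)$ (the function name is illustrative):

```python
import math

def level_index(x):
    """Level-index representation ell + a of x >= 0, where
    a = ln_ell(x) lies in [0, 1) per (3.1.7)."""
    level, a = 0, x
    while a >= 1.0:
        a = math.log(a)    # apply one more level of ln
        level += 1
    return level + a

# Even huge arguments map to small representations:
# level_index(1e300) is about 4.63 (level 4).
```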

For further information see Clenshaw and Olver (1984) and Clenshaw et al. (1989). For applications see Lozier (1993).

For further references on level-index arithmetic (and also other arithmetics) see Anuta et al. (1996). See also Hayes (2009).

# §3.1(v) Error Measures

If $x^{*}$ is an approximation to a real or complex number $x$, then the absolute error is

 3.1.8 $\epsilon_{a}=\left|x^{*}-x\right|.$

If $x\neq 0$, the relative error is

 3.1.9 $\epsilon_{r}=\left|\frac{x^{*}-x}{x}\right|=\frac{\epsilon_{a}}{\left|x\right|}.$

The relative precision is

 3.1.10 $\epsilon_{\mathit{rp}}=\left|\ln\left(x^{*}/x\right)\right|,$

where $xx^{*}>0$ for real variables, and $xx^{*}\neq 0$ for complex variables (with the principal value of the logarithm).

The mollified error is

 3.1.11 $\epsilon_{m}=\frac{\left|x^{*}-x\right|}{\max(\left|x\right|,1)}.$
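The four measures (3.1.8)–(3.1.11) can be computed together for real arguments; a minimal sketch, assuming $x\neq 0$ and $xx^{*}>0$ so that all four are defined:

```python
import math

def error_measures(x_star, x):
    """Absolute error, relative error, relative precision, and
    mollified error of the approximation x_star to x (real case)."""
    eps_a = abs(x_star - x)               # absolute error, (3.1.8)
    eps_r = eps_a / abs(x)                # relative error, (3.1.9)
    eps_rp = abs(math.log(x_star / x))    # relative precision, (3.1.10)
    eps_m = eps_a / max(abs(x), 1.0)      # mollified error, (3.1.11)
    return eps_a, eps_r, eps_rp, eps_m
```

Note that the mollified error interpolates between the other two: for $|x|\leq 1$ it equals the absolute error, and for $|x|\geq 1$ the relative error.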

For error measures for complex arithmetic see Olver (1983).