# §3.5(i) Trapezoidal Rules

The elementary trapezoidal rule is given by

 3.5.1 $\int_{a}^{b}f(x)dx=\tfrac{1}{2}h(f(a)+f(b))-\tfrac{1}{12}h^{3}f^{\prime\prime}(\xi),$

where $h=b-a$, $f\in\mathop{C^{2}\/}\nolimits[a,b]$, and $a<\xi<b$.

The composite trapezoidal rule is

 3.5.2 $\int_{a}^{b}f(x)dx=h(\tfrac{1}{2}f_{0}+f_{1}+\dots+f_{n-1}+\tfrac{1}{2}f_{n})+E_{n}(f),$

where $h=(b-a)/n$, $x_{k}=a+kh$, $f_{k}=f(x_{k})$, $k=0,1,\dots,n$, and

 3.5.3 $E_{n}(f)=-\frac{b-a}{12}h^{2}f^{\prime\prime}(\xi),$ $a<\xi<b$.
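The composite rule (3.5.2) is straightforward to implement. The following Python sketch (illustrative only, not part of the source text) evaluates it and exhibits the $O(h^{2})$ error behavior predicted by (3.5.3):

```python
# Illustrative sketch of the composite trapezoidal rule (3.5.2).
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] using n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * s

# Example: integrate exp(x) on [0, 1]; the exact value is e - 1.
approx = trapezoid(math.exp, 0.0, 1.0, 1000)
# By (3.5.3) halving h should reduce the error by a factor of about 4.
err_coarse = abs(trapezoid(math.exp, 0.0, 1.0, 100) - (math.e - 1))
err_fine = abs(trapezoid(math.exp, 0.0, 1.0, 200) - (math.e - 1))
```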

If in addition $f$ is periodic, $f\in\mathop{C^{k}\/}\nolimits(\Real)$, and the integral is taken over a period, then

 3.5.4 $E_{n}(f)=\mathop{O\/}\nolimits\!\left(h^{k}\right),$ $h\to 0$.

In particular, when $k=\infty$ the error term is an exponentially-small function of $1/h$, and in these circumstances the composite trapezoidal rule is exceptionally efficient. For an example see §3.5(ix).
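The rapid convergence for smooth periodic integrands can be observed numerically. The sketch below (illustrative, not from the source) uses the standard integral $\int_{0}^{2\pi}(2+\cos t)^{-1}\,dt=2\pi/\sqrt{3}$; since the integrand is periodic and $C^{\infty}$, the trapezoidal sum converges far faster than $O(h^{2})$:

```python
# Illustrative sketch: trapezoidal rule on a smooth periodic integrand,
# integrated over a full period, converges much faster than O(h^2).
import math

def trapezoid_periodic(f, period, n):
    # For a periodic integrand the two half-weight endpoints coincide,
    # so the rule reduces to an equal-weight sum over n nodes.
    h = period / n
    return h * sum(f(k * h) for k in range(n))

exact = 2 * math.pi / math.sqrt(3)   # integral of 1/(2 + cos t) over [0, 2*pi]
f = lambda t: 1.0 / (2.0 + math.cos(t))
err8 = abs(trapezoid_periodic(f, 2 * math.pi, 8) - exact)
err32 = abs(trapezoid_periodic(f, 2 * math.pi, 32) - exact)
# err32 is smaller than err8 by many orders of magnitude, not merely by 16x.
```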

Similar results hold for the trapezoidal rule in the form

 3.5.5 $\int_{-\infty}^{\infty}f(t)dt=h\sum_{k=-\infty}^{\infty}f(kh)+E_{h}(f),$

with a function $f$ that is analytic in a strip containing $\Real$. For further information and examples, see Goodwin (1949b). In Stenger (1993, Chapter 3) the rule (3.5.5) is considered in the framework of Sinc approximations (§3.3(vi)). See also Poisson’s summation formula (§1.8(iv)).

If $k$ in (3.5.4) is not arbitrarily large, and if odd-order derivatives of $f$ are known at the end points $a$ and $b$, then the composite trapezoidal rule can be improved by means of the Euler–Maclaurin formula (§2.10(i)). See Davis and Rabinowitz (1984, pp. 134–142) and Temme (1996b, p. 25).
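The leading Euler–Maclaurin correction subtracts $\tfrac{1}{12}h^{2}(f'(b)-f'(a))$ from the trapezoidal sum, eliminating the $O(h^{2})$ term in (3.5.3). A minimal Python sketch (illustrative, not from the source):

```python
# Illustrative sketch: the first Euler-Maclaurin endpoint correction
# T_n - (h^2/12)(f'(b) - f'(a)) removes the leading O(h^2) error term,
# leaving an O(h^4) error.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

f, fprime = math.exp, math.exp        # f = f' = exp on [0, 1]
a, b, n = 0.0, 1.0, 100
h = (b - a) / n
exact = math.e - 1
plain = trapezoid(f, a, b, n)
corrected = plain - h * h / 12 * (fprime(b) - fprime(a))
```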

# §3.5(ii) Simpson’s Rule

Let $h=\frac{1}{2}(b-a)$ and $f\in\mathop{C^{4}\/}\nolimits[a,b]$. Then the elementary Simpson’s rule is

 3.5.6 $\int_{a}^{b}f(x)dx=\tfrac{1}{3}h(f(a)+4f(\tfrac{1}{2}(a+b))+f(b))-\tfrac{1}{90}h^{5}f^{(4)}(\xi),$

where $a<\xi<b$.

Now let $h=(b-a)/n$, $x_{k}=a+kh$, and $f_{k}=f(x_{k})$, $k=0,1,\dots,n$. Then the composite Simpson’s rule is

 3.5.7 $\int_{a}^{b}f(x)dx=\tfrac{1}{3}h(f_{0}+4f_{1}+2f_{2}+4f_{3}+2f_{4}+\cdots+4f_{n-1}+f_{n})+E_{n}(f),$

where $n$ is even and

 3.5.8 $E_{n}(f)=-\frac{b-a}{180}h^{4}f^{(4)}(\xi),$ $a<\xi<b$.
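A direct implementation of (3.5.7) is given below (a Python sketch for illustration, not from the source). By (3.5.8) the rule is exact for cubics, since $f^{(4)}$ then vanishes:

```python
# Illustrative sketch of the composite Simpson rule (3.5.7); n must be even.
import math

def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))  # odd-index nodes
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))  # even-index nodes
    return h * s / 3

# Exactness for cubics: integral of x^3 on [0, 1] is 1/4, even with n = 2.
cubic = simpson(lambda x: x**3, 0.0, 1.0, 2)
smooth = simpson(math.exp, 0.0, 1.0, 100)   # approximates e - 1 to O(h^4)
```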

Simpson’s rule can be regarded as a combination of two trapezoidal rules, one with step size $h$ and one with step size $h/2$ to refine the error term.

# §3.5(iii) Romberg Integration

Further refinements are achieved by Romberg integration. If $f\in\mathop{C^{2m+2}\/}\nolimits[a,b]$, then the remainder $E_{n}(f)$ in (3.5.2) can be expanded in the form

 3.5.9 $E_{n}(f)=c_{1}h^{2}+c_{2}h^{4}+\dots+c_{m}h^{2m}+\mathop{O\/}\nolimits\!\left(h^{2m+2}\right),$

where $h=(b-a)/n$. As in Simpson’s rule, by combining the rule for $h$ with that for $h/2$, the first error term $c_{1}h^{2}$ in (3.5.9) can be eliminated. With the Romberg scheme successive terms $c_{1}h^{2},c_{2}h^{4},\dots$, in (3.5.9) are eliminated, according to the formula

 3.5.10 $G_{k}(\tfrac{1}{2}h)=G_{k-1}(\tfrac{1}{2}h)+\frac{G_{k-1}(\frac{1}{2}h)-G_{k-1}(h)}{4^{k}-1},$ $k\geq 1$,

beginning with

 3.5.11 $G_{0}(h)=h(\tfrac{1}{2}f_{0}+f_{1}+\dots+f_{n-1}+\tfrac{1}{2}f_{n}),$

although we may also start with the elementary rule with $G_{0}(h)=\frac{1}{2}h(f(a)+f(b))$ and $h=b-a$. To generate $G_{k}(h)$ the quantities $G_{0}(h),G_{0}(h/2),\dots,G_{0}(h/2^{k})$ are needed. These can be found by means of the recursion

 3.5.12 $G_{0}(\tfrac{1}{2}h)=\tfrac{1}{2}G_{0}(h)+\tfrac{1}{2}h\sum_{k=0}^{n-1}f\left(x_{0}+(k+\tfrac{1}{2})h\right),$

which depends on function values computed previously.
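The scheme (3.5.10)–(3.5.12) can be organized as a triangular table in which each step-halving adds one row. A Python sketch (illustrative, not from the source, starting from the elementary rule with $h=b-a$):

```python
# Illustrative sketch of Romberg integration via (3.5.10)-(3.5.12).
import math

def romberg(f, a, b, m):
    """Return the fully extrapolated value G_m after m step-halvings."""
    h = b - a
    row = [0.5 * h * (f(a) + f(b))]                 # G_0(h), elementary rule
    for i in range(1, m + 1):
        # (3.5.12): halve the step, reusing all previous function values
        # and adding only the new midpoints.
        mid = sum(f(a + (k + 0.5) * h) for k in range(2 ** (i - 1)))
        new = [0.5 * row[0] + 0.5 * h * mid]        # G_0(h/2)
        # (3.5.10): eliminate the c_1 h^2, c_2 h^4, ... terms in turn.
        for k in range(1, i + 1):
            new.append(new[k - 1] + (new[k - 1] - row[k - 1]) / (4 ** k - 1))
        row = new
        h *= 0.5
    return row[-1]

# Two smooth test integrals: exp on [0,1] (exact e - 1) and 4/(1+x^2)
# on [0,1] (exact pi).
r1 = romberg(math.exp, 0.0, 1.0, 5)
r2 = romberg(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0, 6)
```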

If $f\in\mathop{C^{2k+2}\/}\nolimits(a,b)$, then for $j,k=0,1,\dots$,

 3.5.13 $\int_{a}^{b}f(x)dx-G_{k}\left(\frac{b-a}{2^{j}}\right)=-\frac{(b-a)^{2k+3}}{2^{k(k+1)}}\frac{4^{-j(k+1)}}{(2k+2)!}\left|\mathop{B_{2k+2}\/}\nolimits\right|f^{(2k+2)}(\xi),$

for some $\xi\in(a,b)$. For the Bernoulli numbers $\mathop{B_{m}\/}\nolimits$ see §24.2(i).

When $f\in\mathop{C^{\infty}\/}\nolimits$, the Romberg method affords a means of obtaining high accuracy in many cases with a relatively simple adaptive algorithm. However, as illustrated by the next example, other methods may be more efficient.

# Example

With $\mathop{J_{0}\/}\nolimits\!\left(t\right)$ denoting the Bessel function (§10.2(ii)) the integral

 3.5.14 $\int_{0}^{\infty}e^{-pt}\mathop{J_{0}\/}\nolimits\!\left(t\right)dt=\frac{1}{\sqrt{p^{2}+1}}$

is computed with $p=1$ on the interval $[0,30]$. Using (3.5.10) with $h=30/4=7.5$ we obtain $G_{7}(h)$ with 14 correct digits. About $2^{9}=512$ function evaluations are needed. (With the 20-point Gauss–Laguerre formula (§3.5(v)) the same precision can be achieved with 15 function evaluations.) With $j=2$ and $k=7$, the coefficient of the derivative $f^{(16)}(\xi)$ in (3.5.13) is found to be $(0.14\dots)\times 10^{-13}$.

See Davis and Rabinowitz (1984, pp. 440–441) for modifications of the Romberg method when the function $f$ is singular.

# §3.5(iv) Interpolatory Quadrature Rules

An interpolatory quadrature rule

 3.5.15 $\int_{a}^{b}f(x)w(x)dx=\sum_{k=1}^{n}w_{k}f(x_{k})+E_{n}(f),$

with weight function $w(x)$, is one for which $E_{n}(f)=0$ whenever $f$ is a polynomial of degree $\leq n-1$. The nodes $x_{1},x_{2},\dots,x_{n}$ are prescribed, and the weights $w_{k}$ and error term $E_{n}(f)$ are found by integrating the product of the Lagrange interpolation polynomial of degree $n-1$ and $w(x)$.

If the extreme members of the set of nodes $x_{1},x_{2},\dots,x_{n}$ are the endpoints $a$ and $b$, then the quadrature rule is said to be closed. If instead the set $x_{1},x_{2},\dots,x_{n}$ lies in the open interval $(a,b)$, then the quadrature rule is said to be open.

Rules of closed type include the Newton–Cotes formulas such as the trapezoidal rules and Simpson’s rule. Examples of open rules are the Gauss formulas (§3.5(v)), the midpoint rule, and Fejér’s quadrature rule. For the latter $a=-1$, $b=1$, and the nodes $x_{k}$ are the extrema of the Chebyshev polynomial $\mathop{T_{n}\/}\nolimits\!\left(x\right)$ (§3.11(ii) and §18.3). If we add $-1$ and $1$ to this set of $x_{k}$, then the resulting closed formula is the frequently-used Clenshaw–Curtis formula, whose weights are positive and given by

 3.5.16 $w_{k}=\frac{g_{k}}{n}\left(1-\sum_{j=1}^{\left\lfloor n/2\right\rfloor}\frac{b_{j}}{4j^{2}-1}\mathop{\cos\/}\nolimits\!\left(2jk\pi/n\right)\right),$

where $x_{k}=\mathop{\cos\/}\nolimits\!\left(k\pi/n\right)$, $k=0,1,\ldots,n$, and

 3.5.17 $g_{k}=\begin{cases}1,&k=0,n,\\ 2,&\text{otherwise},\end{cases}\quad b_{j}=\begin{cases}1,&j=\tfrac{1}{2}n,\\ 2,&\text{otherwise}.\end{cases}$
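Formulas (3.5.16)–(3.5.17) translate directly into code. The Python sketch below (illustrative, not from the source) generates the Clenshaw–Curtis nodes and weights on $[-1,1]$; for $n=2$ it reproduces Simpson’s rule:

```python
# Illustrative sketch: Clenshaw-Curtis nodes and weights from (3.5.16)-(3.5.17).
import math

def clenshaw_curtis(n):
    """Nodes x_k = cos(k*pi/n) and weights w_k, k = 0, ..., n, on [-1, 1]."""
    nodes, weights = [], []
    for k in range(n + 1):
        g = 1 if k in (0, n) else 2                       # (3.5.17), g_k
        s = sum((1 if 2 * j == n else 2) / (4 * j * j - 1)  # (3.5.17), b_j
                * math.cos(2 * j * k * math.pi / n)
                for j in range(1, n // 2 + 1))
        nodes.append(math.cos(k * math.pi / n))
        weights.append(g / n * (1 - s))                   # (3.5.16)
    return nodes, weights

x, w = clenshaw_curtis(8)
approx = sum(wk * math.exp(xk) for xk, wk in zip(x, w))   # integral of e^x
exact = math.e - 1 / math.e                               # over [-1, 1]
```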

For further information, see Mason and Handscomb (2003, Chapter 8), Davis and Rabinowitz (1984, pp. 74–92), and Clenshaw and Curtis (1960).

For detailed comparisons of the Clenshaw–Curtis formula with Gauss quadrature (§3.5(v)), see Trefethen (2008, 2011).

# §3.5(v) Gauss Quadrature

Let $\{p_{n}\}$ denote the set of monic polynomials $p_{n}$ of degree $n$ (coefficient of $x^{n}$ equal to $1$) that are orthogonal with respect to a positive weight function $w$ on a finite or infinite interval $(a,b)$; compare §18.2(i). In Gauss quadrature (also known as Gauss–Christoffel quadrature) we use (3.5.15) with nodes $x_{k}$ the zeros of $p_{n}$, and weights $w_{k}$ given by

 3.5.18 $w_{k}=\int_{a}^{b}\frac{p_{n}(x)}{(x-x_{k})p^{\prime}_{n}(x_{k})}\,w(x)dx.$

The $w_{k}$ are also known as Christoffel coefficients or Christoffel numbers and they are all positive. The remainder is given by

 3.5.19 $E_{n}(f)=\gamma_{n}f^{(2n)}(\xi)/(2n)!,$

where

 3.5.20 $\gamma_{n}=\int_{a}^{b}p_{n}^{2}(x)w(x)dx,$

and $\xi$ is some point in $(a,b)$. As a consequence, the rule is exact for polynomials of degree $\leq 2n-1$.

In practical applications the weight function $w(x)$ is chosen to simulate the asymptotic behavior of the integrand as the endpoints are approached. For $\mathop{C^{\infty}\/}\nolimits$ functions Gauss quadrature can be very efficient. In adaptive algorithms the evaluation of the nodes and weights may cause difficulties, unless exact values are known.

For the derivation of Gauss quadrature formulas see Gautschi (2004, pp. 22–32), Gil et al. (2007a, §5.3), and Davis and Rabinowitz (1984, §§2.7 and 3.6). Stroud and Secrest (1966) includes computational methods and extensive tables. For further extensions, applications, and computation of orthogonal polynomials and Gauss-type formulas, see Gautschi (1994, 1996, 2004). For effective testing of Gaussian quadrature rules see Gautschi (1983).

For the classical orthogonal polynomials related to the following Gauss rules, see §18.3. The given quantities $\gamma_{n}$ follow from (18.2.5), (18.2.7), Table 18.3.1, and the relation $\gamma_{n}=h_{n}/k_{n}^{2}$.

# Gauss–Legendre Formula

 3.5.21 $[a,b]=[-1,1],\qquad w(x)=1,\qquad\gamma_{n}=\frac{2^{2n+1}}{2n+1}\,\frac{(n!)^{4}}{((2n)!)^{2}}.$

The nodes $x_{k}$ and weights $w_{k}$ for $n=5$, $10$ are shown in Tables 3.5.1 and 3.5.2. The $p_{n}(x)$ are the monic Legendre polynomials, that is, the polynomials $\mathop{P_{n}\/}\nolimits\!\left(x\right)$ (§18.3) scaled so that the coefficient of the highest power of $x$ in their explicit forms is unity.

# Gauss–Chebyshev Formula

 3.5.22 $[a,b]=[-1,1],\qquad w(x)=(1-x^{2})^{-1/2},\qquad\gamma_{n}=\frac{\pi}{2^{2n-1}}.$

The nodes $x_{k}$ and weights $w_{k}$ are known explicitly:

 3.5.23 $x_{k}=\mathop{\cos\/}\nolimits\!\left(\frac{2k-1}{2n}\pi\right),\qquad w_{k}=\frac{\pi}{n},$ $k=1,2,\dots,n$.
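Because the nodes and weights in (3.5.23) are explicit, the rule is trivial to code. The Python sketch below (illustrative, not from the source) applies it to $\int_{-1}^{1}x^{2}(1-x^{2})^{-1/2}\,dx=\pi/2$, which is computed exactly already for $n=3$ since the rule is exact for polynomials of degree $\leq 2n-1$:

```python
# Illustrative sketch: Gauss-Chebyshev quadrature (3.5.23) for integrals
# with weight (1 - x^2)^(-1/2) on [-1, 1].
import math

def gauss_chebyshev(f, n):
    # Nodes x_k = cos((2k-1)pi/(2n)); all weights equal pi/n.
    return (math.pi / n) * sum(
        f(math.cos((2 * k - 1) * math.pi / (2 * n))) for k in range(1, n + 1))

# integral of x^2 / sqrt(1 - x^2) over [-1, 1] equals pi/2
approx = gauss_chebyshev(lambda x: x * x, 3)
```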

In the case of Chebyshev weight functions $w(x)=(1-x)^{\alpha}(1+x)^{\beta}$ on $[-1,1]$, with $\left|\alpha\right|=\left|\beta\right|=\frac{1}{2}$, the nodes $x_{k}$, weights $w_{k}$, and error constant $\gamma_{n}$, are as follows:

 3.5.24 $x_{k}=\mathop{\cos\/}\nolimits\!\left(\frac{k}{n+1}\pi\right),\qquad w_{k}=\frac{\pi}{n+1}{\mathop{\sin\/}\nolimits^{2}}\!\left(\frac{k}{n+1}\pi\right),\qquad\gamma_{n}=\frac{\pi}{2^{2n+1}},$ $\alpha=\beta=\tfrac{1}{2}$,

and

 3.5.25 $x_{k}=\pm\mathop{\cos\/}\nolimits\!\left(\frac{2k}{2n+1}\pi\right),\qquad w_{k}=\frac{4\pi}{2n+1}{\mathop{\sin\/}\nolimits^{2}}\!\left(\frac{k}{2n+1}\pi\right),\qquad\gamma_{n}=\frac{\pi}{2^{2n}},$ $\alpha=-\beta=\pm\tfrac{1}{2}$,

where $k=1,2,\dots,n$.

# Gauss–Jacobi Formula

 3.5.26 $[a,b]=[-1,1],\qquad w(x)=(1-x)^{\alpha}(1+x)^{\beta},\qquad\gamma_{n}=\frac{\mathop{\Gamma\/}\nolimits\!\left(n+\alpha+1\right)\mathop{\Gamma\/}\nolimits\!\left(n+\beta+1\right)\mathop{\Gamma\/}\nolimits\!\left(n+\alpha+\beta+1\right)}{(2n+\alpha+\beta+1)(\mathop{\Gamma\/}\nolimits\!\left(2n+\alpha+\beta+1\right))^{2}}2^{2n+\alpha+\beta+1}n!,$ $\alpha>-1$, $\beta>-1$.

The $p_{n}(x)$ are the monic Jacobi polynomials $\mathop{P^{(\alpha,\beta)}_{n}\/}\nolimits\!\left(x\right)$ (§18.3).

# Gauss–Laguerre Formula

 3.5.27 $[a,b)=[0,\infty),\qquad w(x)=x^{\alpha}e^{-x},\qquad\gamma_{n}=n!\,\mathop{\Gamma\/}\nolimits\!\left(n+\alpha+1\right),$ $\alpha>-1$.

If $\alpha\neq 0$ this is called the generalized Gauss–Laguerre formula.

The nodes $x_{k}$ and weights $w_{k}$ for $\alpha=0$ and $n=5$, $10$ are shown in Tables 3.5.6 and 3.5.7. The $p_{n}(x)$ are the monic Laguerre polynomials $\mathop{L_{n}\/}\nolimits\!\left(x\right)$ (§18.3).

# Gauss–Hermite Formula

 3.5.28 $(a,b)=(-\infty,\infty),\qquad w(x)=e^{-x^{2}},\qquad\gamma_{n}=\sqrt{\pi}\,\frac{n!}{2^{n}}.$

The nodes $x_{k}$ and weights $w_{k}$ for $n=5,10$ are shown in Tables 3.5.10 and 3.5.11. The $p_{n}(x)$ are the monic Hermite polynomials $\mathop{H_{n}\/}\nolimits\!\left(x\right)$ (§18.3).

# Gauss Formula for a Logarithmic Weight Function

 3.5.29 $[a,b]=[0,1],\qquad w(x)=\mathop{\ln\/}\nolimits\!\left(1/x\right).$

The nodes $x_{k}$ and weights $w_{k}$ for $n=5,10$ are shown in Tables 3.5.14 and 3.5.15.

# §3.5(vi) Eigenvalue/Eigenvector Characterization of Gauss Quadrature Formulas

All the monic orthogonal polynomials $\{p_{n}\}$ used with Gauss quadrature satisfy a three-term recurrence relation (§18.2(iv)):

 3.5.30 $p_{k+1}(x)=(x-\alpha_{k})p_{k}(x)-\beta_{k}p_{k-1}(x),$ $k=0,1,\dots$,

with $\beta_{k}>0$, $p_{-1}(x)=0$, and $p_{0}(x)=1$.

The Gauss nodes $x_{k}$ (the zeros of $p_{n}$) are the eigenvalues of the (symmetric tridiagonal) Jacobi matrix of order $n\times n$:

 3.5.31 $\mathbf{J}_{n}=\begin{bmatrix}\alpha_{0}&\sqrt{\beta_{1}}&&&0\\ \sqrt{\beta_{1}}&\alpha_{1}&\sqrt{\beta_{2}}&&\\ &\ddots&\ddots&\ddots&\\ &&\sqrt{\beta_{n-2}}&\alpha_{n-2}&\sqrt{\beta_{n-1}}\\ 0&&&\sqrt{\beta_{n-1}}&\alpha_{n-1}\end{bmatrix}.$

Let $\mathbf{v}_{k}$ denote the normalized eigenvector of $\mathbf{J}_{n}$ corresponding to the eigenvalue $x_{k}$. Then the weights are given by

 3.5.32 $w_{k}=\beta_{0}v_{k,1}^{2},$ $k=1,2,\dots,n$,

where $\beta_{0}=\int_{a}^{b}w(x)dx$ and $v_{k,1}$ is the first element of $\mathbf{v}_{k}$. Also, the error constant (3.5.20) is given by

 3.5.33 $\gamma_{n}=\beta_{0}\beta_{1}\cdots\beta_{n}.$

Tables 3.5.1–3.5.13 can be verified by application of the results given in the present subsection. In these cases the coefficients $\alpha_{k}$ and $\beta_{k}$ are obtainable explicitly from results given in §18.9(i).
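This eigenvalue characterization (often called the Golub–Welsch algorithm) is easy to realize numerically. The Python sketch below (illustrative, not from the source) builds the Jacobi matrix (3.5.31) for the Legendre weight, for which the monic recurrence coefficients are $\alpha_{k}=0$, $\beta_{k}=k^{2}/(4k^{2}-1)$, and $\beta_{0}=\int_{-1}^{1}w(x)dx=2$:

```python
# Illustrative sketch: Gauss-Legendre nodes and weights via the Jacobi
# matrix (3.5.31) and the weight formula (3.5.32).
import numpy as np

def gauss_legendre(n):
    k = np.arange(1, n)
    beta = k**2 / (4.0 * k**2 - 1.0)         # beta_1, ..., beta_{n-1}
    # Symmetric tridiagonal Jacobi matrix with zero diagonal (alpha_k = 0).
    J = np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    x, V = np.linalg.eigh(J)                 # nodes = eigenvalues
    w = 2.0 * V[0, :]**2                     # (3.5.32) with beta_0 = 2
    return x, w

x, w = gauss_legendre(5)
# The 5-point rule is exact for polynomials of degree <= 9;
# integral of x^8 over [-1, 1] equals 2/9.
approx = float(np.sum(w * x**8))
```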

# §3.5(vii) Oscillatory Integrals

Integrals of the form

 3.5.34 $\int_{a}^{b}f(x)\mathop{\cos\/}\nolimits\!\left(\omega x\right)dx,$ $\int_{a}^{b}f(x)\mathop{\sin\/}\nolimits\!\left(\omega x\right)dx,$

can be computed by Filon’s rule. See Davis and Rabinowitz (1984, pp. 146–168).

Oscillatory integral transforms are treated in Wong (1982) by a method based on Gaussian quadrature. A comparison of several methods, including an extension of the Clenshaw–Curtis formula (§3.5(iv)), is given in Evans and Webster (1999).

For computing infinite oscillatory integrals, Longman’s method may be used. The integral is written as an alternating series of positive and negative subintegrals that are computed individually; see Longman (1956). Convergence acceleration schemes, for example Levin’s transformation (§3.9(v)), can be used when evaluating the series. Further methods are given in Clendenin (1966) and Lyness (1985).

For a comprehensive survey of quadrature of highly oscillatory integrals, including multidimensional integrals, see Iserles et al. (2006).

# §3.5(viii) Complex Gauss Quadrature

For the Bromwich integral

 3.5.35 $I(f)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{\zeta}\zeta^{-s}f(\zeta)d\zeta,$ $s>0$, $c>c_{0}>0$,

a complex Gauss quadrature formula is available. Here $f(\zeta)$ is assumed analytic in the half-plane $\realpart{\zeta}>c_{0}$ and bounded as $\zeta\to\infty$ in $\left|\mathop{\mathrm{ph}\/}\nolimits\zeta\right|\leq\frac{1}{2}\pi$. The quadrature rule for (3.5.35) is

 3.5.36 $I(f)=\sum_{k=1}^{n}w_{k}f(\zeta_{k})+E_{n}(f),$

where $E_{n}(f)=0$ if $f(\zeta)$ is a polynomial of degree $\leq 2n-1$ in $1/\zeta$. Complex orthogonal polynomials $p_{n}(1/\zeta)$ of degree $n=0,1,2,\dots$, in $1/\zeta$ that satisfy the orthogonality condition

 3.5.37 $\int_{c-i\infty}^{c+i\infty}e^{\zeta}\zeta^{-s}p_{k}(1/\zeta)p_{\ell}(1/\zeta)d\zeta=0,$ $k\neq\ell$,

are related to Bessel polynomials (§§10.49(ii) and 18.34). The complex Gauss nodes $\zeta_{k}$ have positive real part for all $s>0$.

The nodes and weights of the 5-point complex Gauss quadrature formula (3.5.36) for $s=1$ are shown in Table 3.5.18. Extensive tables of quadrature nodes and weights can be found in Krylov and Skoblya (1985).

# Example. Laplace Transform Inversion

From §1.14(iii)

 3.5.38 $G(p)=\int_{0}^{\infty}e^{-pt}g(t)dt,$
 3.5.39 $g(t)=\frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}e^{tp}G(p)dp,$

with appropriate conditions. The pair

 3.5.40 $g(t)=\mathop{J_{0}\/}\nolimits\!\left(t\right),\qquad G(p)=\frac{1}{\sqrt{p^{2}+1}},$

where $\mathop{J_{0}\/}\nolimits\!\left(t\right)$ is the Bessel function (§10.2(ii)), satisfy these conditions, provided that $\sigma>0$. The integral (3.5.39) has the form (3.5.35) if we set $\zeta=tp$, $c=t\sigma$, and $f(\zeta)=t^{-1}\zeta^{s}G(\zeta/t)$. We choose $s=1$ so that $f(\zeta)=\mathop{O\/}\nolimits\!\left(1\right)$ at infinity. Equation (3.5.36), without the error term, becomes

 3.5.41 $g(t)=\sum_{k=1}^{n}\frac{w_{k}\zeta_{k}}{\sqrt{\zeta_{k}^{2}+t^{2}}},$

approximately.

Using Table 3.5.18 we compute $g(t)$ for $n=5$. The results are given in the middle column of Table 3.5.19, accompanied by the actual 10D values in the last column. Agreement is very good for small values of $t$, but not for larger values. For these cases the integration path may need to be deformed; see §3.5(ix).

# §3.5(ix) Other Contour Integrals

A frequent problem with contour integrals is heavy cancellation, which occurs especially when the value of the integral is exponentially small compared with the maximum absolute value of the integrand. To avoid cancellation we try to deform the path to pass through a saddle point in such a way that the maximum contribution of the integrand is derived from the neighborhood of the saddle point. For example, steepest descent paths can be used; see §2.4(iv).

# Example

In (3.5.35) take $s=1$ and $f(\zeta)=e^{-2\lambda\sqrt{\zeta}}$, with $\lambda>0$. When $\lambda$ is large the integral becomes exponentially small, and application of the quadrature rule of §3.5(viii) is useless. In fact from (7.14.4) and the inversion formula for the Laplace transform (§1.14(iii)) we have

 3.5.42 $\mathop{\mathrm{erfc}\/}\nolimits\lambda=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{\zeta-2\lambda\sqrt{\zeta}}\frac{d\zeta}{\zeta},$ $c>0$,

where $\mathop{\mathrm{erfc}\/}\nolimits z$ is the complementary error function, and from (7.12.1) it follows that

 3.5.43 $\mathop{\mathrm{erfc}\/}\nolimits\lambda\sim\frac{e^{-\lambda^{2}}}{\sqrt{\pi}\lambda},$ $\lambda\to\infty$.

With the transformation $\zeta=\lambda^{2}t$, (3.5.42) becomes

 3.5.44 $\mathop{\mathrm{erfc}\/}\nolimits\lambda=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{\lambda^{2}(t-2\sqrt{t})}\frac{dt}{t},$ $c>0$,

with saddle point at $t=1$, and when $c=1$ the vertical path intersects the real axis at the saddle point. The steepest descent path is given by $\imagpart{(t-2\sqrt{t})}=0$, or in polar coordinates $t=re^{i\theta}$ we have $r={\mathop{\sec\/}\nolimits^{2}}\!\left(\frac{1}{2}\theta\right)$. Thus

 3.5.45 $\mathop{\mathrm{erfc}\/}\nolimits\lambda=\frac{e^{-\lambda^{2}}}{2\pi}\int_{-\pi}^{\pi}e^{-\lambda^{2}{\mathop{\tan\/}\nolimits^{2}}\!\left(\frac{1}{2}\theta\right)}d\theta.$

The integrand can be extended as a periodic $\mathop{C^{\infty}\/}\nolimits$ function on $\Real$ with period $2\pi$, and, as noted in §3.5(i), the trapezoidal rule is exceptionally efficient in this case.

Table 3.5.20 gives the results of applying the composite trapezoidal rule (3.5.2) with step size $h$; $n$ indicates the number of function values in the rule that are larger than $10^{-15}$ (we exploit the fact that the integrand is even). All digits shown in the approximation in the final row are correct.
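The computation behind this table is easy to reproduce. The Python sketch below (illustrative, not from the source) applies the composite trapezoidal rule to (3.5.45) and compares against the library value of $\mathop{\mathrm{erfc}\/}\nolimits\lambda$; at $\theta=\pm\pi$ the integrand underflows harmlessly to zero:

```python
# Illustrative sketch: evaluating (3.5.45) with the composite trapezoidal
# rule.  The integrand extends to a periodic C-infinity function, so the
# error decreases faster than any power of h.
import math

def erfc_quad(lam, n):
    h = 2 * math.pi / n
    total = 0.0
    for j in range(n + 1):
        theta = -math.pi + j * h
        t = math.tan(0.5 * theta)
        v = math.exp(-lam * lam * t * t)   # underflows to 0 near theta = +-pi
        total += 0.5 * v if j in (0, n) else v
    return math.exp(-lam * lam) / (2 * math.pi) * h * total

approx = erfc_quad(2.0, 64)                # compare with math.erfc(2.0)
```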

A second example is provided in Gil et al. (2001), where the method of contour integration is used to evaluate Scorer functions of complex argument (§9.12). See also Gil et al. (2003b).

If $f$ is meromorphic, with poles near the saddle point, then the foregoing method can be modified. A special case is the rule for Hilbert transforms (§1.14(v)):

 3.5.46 $\mathop{\mathcal{H}\/}\nolimits\left(f;x\right)=\frac{1}{\pi}\pvint_{-\infty}^{\infty}\frac{f(t)}{t-x}dt,$ $x\in\Real$,

where the integral is the Cauchy principal value. See Kress and Martensen (1970).

Other contour integrals occur in standard integral transforms or their inverses, for example, Hankel transforms (§10.22(v)), Kontorovich–Lebedev transforms (§10.43(v)), and Mellin transforms (§1.14(iv)).

# §3.5(x) Cubature Formulas

Table 3.5.21 supplies cubature rules, including weights $w_{j}$, for the disk $D$, given by $x^{2}+y^{2}\leq h^{2}$:

 3.5.47 $\frac{1}{\pi h^{2}}\iint_{D}f(x,y)dxdy=\sum_{j=1}^{n}w_{j}f(x_{j},y_{j})+R,$

and the square $S$, given by $\left|x\right|\leq h$, $\left|y\right|\leq h$:

 3.5.48 $\frac{1}{4h^{2}}\iint_{S}f(x,y)dxdy=\sum_{j=1}^{n}w_{j}f(x_{j},y_{j})+R.$

For these results and further information on cubature formulas see Cools (2003).

For integrals in higher dimensions, Monte Carlo methods are another—often the only—alternative. The standard Monte Carlo method samples points uniformly from the integration region to estimate the integral and its error. In more advanced methods points are sampled from a probability distribution, so that they are concentrated in regions that make the largest contribution to the integral. With $N$ function values, the Monte Carlo method aims at an error of order $1/\sqrt{N}$, independently of the dimension of the domain of integration. See Davis and Rabinowitz (1984, pp. 384–417) and Schürer (2004).
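The standard method can be sketched in a few lines of Python (illustrative only; the test integrand $e^{-(x^{2}+y^{2})}$ over the unit square is a hypothetical example, not from the source). The sample standard deviation gives the usual $1/\sqrt{N}$ error estimate:

```python
# Illustrative sketch: basic Monte Carlo estimation of a double integral
# over the unit square, with the standard O(1/sqrt(N)) error estimate.
import math
import random

random.seed(12345)                 # fixed seed for reproducibility
N = 100_000
samples = [math.exp(-(random.random()**2 + random.random()**2))
           for _ in range(N)]
mean = sum(samples) / N            # estimates the double integral
var = sum((s - mean)**2 for s in samples) / (N - 1)
stderr = math.sqrt(var / N)        # estimated standard error of the mean
```

Here the exact value is $\bigl(\int_{0}^{1}e^{-x^{2}}dx\bigr)^{2}=\bigl(\tfrac{\sqrt{\pi}}{2}\mathop{\mathrm{erf}\/}\nolimits 1\bigr)^{2}$, and the estimate typically lies within a few multiples of `stderr` of it.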