- §2.11(i) Numerical Use of Asymptotic Expansions
- §2.11(ii) Connection Formulas
- §2.11(iii) Exponentially-Improved Expansions
- §2.11(iv) Stokes Phenomenon
- §2.11(v) Exponentially-Improved Expansions (continued)
- §2.11(vi) Direct Numerical Transformations

When a rigorous bound or reliable estimate for the remainder term is unavailable, it is unsafe to judge the accuracy of an asymptotic expansion merely from the numerical rate of decrease of the terms at the point of truncation. Even when the series converges this is unwise: the tail needs to be majorized rigorously before the result can be guaranteed. For divergent expansions the situation is even more difficult. First, it is impossible to bound the tail by majorizing its terms. Second, the asymptotic series represents an infinite class of functions, and the remainder depends on which member we have in mind.

As an example consider

2.11.1 | $$I(m)={\int}_{0}^{\pi}\frac{\mathrm{cos}\left(mt\right)}{{t}^{2}+1}dt,$$ | ||

with $m$ a large integer. By integration by parts (§2.3(i))

2.11.2 | $$I(m)\sim {(-1)}^{m}\sum _{s=1}^{\mathrm{\infty}}\frac{{q}_{s}(\pi )}{{m}^{2s}},$$ | ||

$m\to \mathrm{\infty}$, | |||

with

2.11.3 | ${q}_{1}(t)$ | $=-{\displaystyle \frac{2t}{{({t}^{2}+1)}^{2}}},$ | ||

${q}_{2}(t)$ | $={\displaystyle \frac{24({t}^{3}-t)}{{({t}^{2}+1)}^{4}}},$ | |||

${q}_{3}(t)$ | $=-{\displaystyle \frac{240(3{t}^{5}-10{t}^{3}+3t)}{{({t}^{2}+1)}^{6}}}.$ | |||

On rounding to 5D, we have ${q}_{1}(\pi )=-0.05318$, ${q}_{2}(\pi )=0.04791$, ${q}_{3}(\pi )=-0.08985$. Hence

2.11.4 | $$I(10)\sim -\mathrm{0.00053\hspace{0.33em}18}+\mathrm{0.00000\hspace{0.33em}48}-\mathrm{0.00000\hspace{0.33em}01}=-\mathrm{0.00052\hspace{0.33em}71}.$$ | ||

But this answer is incorrect: to 7D $I(10)=-\mathrm{0.00045\hspace{0.33em}58}$. The error term is, in fact, approximately 700 times the last term obtained in (2.11.4). The explanation is that (2.11.2) is a more accurate expansion for the function $I(m)-\frac{1}{2}\pi {\mathrm{e}}^{-m}$ than it is for $I(m)$; see Olver (1997b, pp. 76–78).
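This failure is easy to reproduce. A minimal Python sketch (composite Simpson quadrature; the node count 20000 is an arbitrary choice) evaluates $I(10)$ directly and confirms that the truncated series actually approximates $I(10)-\frac{1}{2}\pi \mathrm{e}^{-10}$:

```python
import math

def I(m, n=20000):
    # I(m) = ∫_0^π cos(mt)/(t²+1) dt by composite Simpson's rule (n even)
    h = math.pi / n
    f = lambda t: math.cos(m * t) / (t * t + 1.0)
    s = f(0.0) + f(math.pi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0

# Truncated asymptotic sum (2.11.4), and the exponentially-small correction
asym = -0.0005318 + 0.0000048 - 0.0000001          # = -0.0005271
quad = I(10)                                        # ≈ -0.0004558
corrected = asym + 0.5 * math.pi * math.exp(-10.0)  # series ≈ I(m) - (π/2)e^{-m}
```

The truncated series misses the quadrature value by about $7\times 10^{-5}$ (roughly 700 times its last term), while adding back $\frac{1}{2}\pi \mathrm{e}^{-10}$ restores agreement, as described above.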

In order to guard against this kind of error remaining undetected, the wanted
function may need to be computed by another method (preferably nonasymptotic)
for the smallest value of the (large) asymptotic variable $x$ that is intended
to be used. If the results agree within $S$ significant figures, then it is
likely—*but not certain*—that the truncated asymptotic series will
yield at least $S$ correct significant figures for larger values of $x$. For
further discussion see Bosley (1996).

In $\mathbb{C}$ both the modulus and phase of the asymptotic variable $z$ need to be taken into account. Suppose an asymptotic expansion holds as $z\to \mathrm{\infty}$ in any closed sector within $\alpha <\mathrm{ph}z<\beta $, say, but not in $\alpha \le \mathrm{ph}z\le \beta $. Then numerical accuracy will disintegrate as the boundary rays $\mathrm{ph}z=\alpha $, $\mathrm{ph}z=\beta $ are approached. In consequence, practical application needs to be confined to a sector ${\alpha}^{\prime}\le \mathrm{ph}z\le {\beta}^{\prime}$ well within the sector of validity, with independent evaluations carried out on the boundaries for the smallest value of $|z|$ intended to be used. The choice of ${\alpha}^{\prime}$ and ${\beta}^{\prime}$ is facilitated by a knowledge of the relevant Stokes lines; see §2.11(iv) below.

However, regardless of whether the remainder can be bounded, the accuracy achievable by direct numerical summation of a divergent asymptotic series is always limited. The rest of this section is devoted to general methods for increasing this accuracy.

From §8.19(i) the generalized exponential integral is given by

2.11.5 | $${E}_{p}\left(z\right)=\frac{{\mathrm{e}}^{-z}{z}^{p-1}}{\mathrm{\Gamma}\left(p\right)}{\int}_{0}^{\mathrm{\infty}}\frac{{\mathrm{e}}^{-zt}{t}^{p-1}}{1+t}dt$$ | ||

when $\mathrm{\Re}p>0$ and $|\mathrm{ph}z|<\frac{1}{2}\pi $, and by analytic continuation for other values of $p$ and $z$. Application of Watson’s lemma (§2.4(i)) yields

2.11.6 | $${E}_{p}\left(z\right)\sim \frac{{\mathrm{e}}^{-z}}{z}\sum _{s=0}^{\mathrm{\infty}}{(-1)}^{s}\frac{{\left(p\right)}_{s}}{{z}^{s}}$$ | ||

when $p$ is fixed and $z\to \mathrm{\infty}$ in any closed sector within $|\mathrm{ph}z|<\frac{3}{2}\pi $. As noted in §2.11(i), this expansion yields poor accuracy as $\mathrm{ph}z$ approaches $\frac{3}{2}\pi $ or $-\frac{3}{2}\pi $. However, on combining (2.11.6) with the connection formula (8.19.18), with $m=1$, we derive

2.11.7 | $${E}_{p}\left(z\right)\sim \frac{2\pi \mathrm{i}{\mathrm{e}}^{-p\pi \mathrm{i}}}{\mathrm{\Gamma}\left(p\right)}{z}^{p-1}+\frac{{\mathrm{e}}^{-z}}{z}\sum _{s=0}^{\mathrm{\infty}}{(-1)}^{s}\frac{{\left(p\right)}_{s}}{{z}^{s}},$$ | ||

valid as $z\to \mathrm{\infty}$ in any closed sector within $\frac{1}{2}\pi <\mathrm{ph}z<\frac{7}{2}\pi $; compare (8.20.3). Since the ray $\mathrm{ph}z=\frac{3}{2}\pi $ is well away from the new boundaries, the compound expansion (2.11.7) yields much more accurate results when $\mathrm{ph}z\to \frac{3}{2}\pi $. In effect, (2.11.7) “corrects” (2.11.6) by introducing a term that is relatively exponentially small in the neighborhood of $\mathrm{ph}z=\pi $, is increasingly significant as $\mathrm{ph}z$ passes from $\pi $ to $\frac{3}{2}\pi $, and becomes the dominant contribution after $\mathrm{ph}z$ passes $\frac{3}{2}\pi $. See also §2.11(iv).

The procedure followed in §2.11(ii) enabled ${E}_{p}\left(z\right)$ to be computed with as much accuracy in the sector $\pi \le \mathrm{ph}z\le 3\pi $ as the original expansion (2.11.6) in $|\mathrm{ph}z|\le \pi $. We now increase substantially the accuracy of (2.11.6) in $|\mathrm{ph}z|\le \pi $ by re-expanding the remainder term.

Optimum truncation in (2.11.6) takes place at $s=n-1$, with $|p+n-1|=|z|$, approximately. Thus

2.11.8 | $$n=\rho -p+\alpha ,$$ | ||

where $z=\rho {\mathrm{e}}^{\mathrm{i}\theta}$, and $|\alpha |$ is bounded as $n\to \mathrm{\infty}$. From (2.11.5) and the identity

2.11.9 | $$\frac{1}{1+t}=\sum _{s=0}^{n-1}{(-1)}^{s}{t}^{s}+{(-1)}^{n}\frac{{t}^{n}}{1+t},$$ | ||

$t\ne -1$, | |||

we have

2.11.10 | $${E}_{p}\left(z\right)=\frac{{\mathrm{e}}^{-z}}{z}\sum _{s=0}^{n-1}{(-1)}^{s}\frac{{\left(p\right)}_{s}}{{z}^{s}}+{(-1)}^{n}\frac{2\pi}{\mathrm{\Gamma}\left(p\right)}{z}^{p-1}{F}_{n+p}\left(z\right),$$ | ||

where

2.11.11 | $${F}_{n+p}\left(z\right)=\frac{{\mathrm{e}}^{-z}}{2\pi}{\int}_{0}^{\mathrm{\infty}}\frac{{\mathrm{e}}^{-zt}{t}^{n+p-1}}{1+t}dt=\frac{\mathrm{\Gamma}\left(n+p\right)}{2\pi}\frac{{E}_{n+p}\left(z\right)}{{z}^{n+p-1}}.$$ | ||

With $n$ given by (2.11.8), we have

2.11.12 | $${F}_{n+p}\left(z\right)=\frac{{\mathrm{e}}^{-z}}{2\pi}{\int}_{0}^{\mathrm{\infty}}\mathrm{exp}\left(-\rho \left(t{\mathrm{e}}^{\mathrm{i}\theta}-\mathrm{ln}t\right)\right)\frac{{t}^{\alpha -1}}{1+t}dt.$$ | ||

For large $\rho $ the integrand has a saddle point at $t={\mathrm{e}}^{-\mathrm{i}\theta}$. Following §2.4(iv), we rotate the integration path through an angle $-\theta $, which is valid by analytic continuation when $-\pi <\theta <\pi $. Then by application of Laplace’s method (§§2.4(iii) and 2.4(iv)), we have

2.11.13 | $${F}_{n+p}\left(z\right)\sim \frac{{\mathrm{e}}^{-\mathrm{i}(\rho +\alpha )\theta}}{1+{\mathrm{e}}^{-\mathrm{i}\theta}}\frac{{\mathrm{e}}^{-\rho -z}}{{(2\pi \rho )}^{1/2}}\sum _{s=0}^{\mathrm{\infty}}\frac{{a}_{2s}(\theta ,\alpha )}{{\rho}^{s}},$$ | ||

$\rho \to \mathrm{\infty}$, | |||

uniformly when $\theta \in [-\pi +\delta ,\pi -\delta ]$ ($\delta >0$) and $|\alpha |$ is bounded. The coefficients are rational functions of $\alpha $ and $1+{\mathrm{e}}^{\mathrm{i}\theta}$, for example, ${a}_{0}(\theta ,\alpha )=1$, and

2.11.14 | $${a}_{2}(\theta ,\alpha )=\frac{1}{12}(6{\alpha}^{2}-6\alpha +1)-\frac{\alpha}{1+{\mathrm{e}}^{\mathrm{i}\theta}}+\frac{1}{{(1+{\mathrm{e}}^{\mathrm{i}\theta})}^{2}}.$$ | ||

Owing to the factor ${\mathrm{e}}^{-\rho}$, that is, ${\mathrm{e}}^{-|z|}$ in (2.11.13),
${F}_{n+p}\left(z\right)$ is uniformly exponentially small compared with ${E}_{p}\left(z\right)$.
For this reason the expansion of ${E}_{p}\left(z\right)$ in $|\mathrm{ph}z|\le \pi -\delta $ supplied by (2.11.8), (2.11.10), and
(2.11.13) is said to be *exponentially improved*.
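For real $z$ the improvement is easy to observe numerically. The sketch below (an illustration, not DLMF code; $p=2$, $z=10$ are arbitrary choices) compares the truncated series (2.11.6) against the re-expansion (2.11.10), with ${F}_{n+p}(z)$ approximated by the $s=0,1$ terms of (2.11.13) at $\theta =0$, $\alpha =0$, using the integral (2.11.5) as reference:

```python
import math

p, z = 2.0, 10.0

def E(p, z, T=6.0, n=20000):
    # Reference E_p(z) from the integral (2.11.5) for real z > 0, p > 1,
    # by composite Simpson's rule on [0, T]; e^{-zT} makes the tail negligible.
    h = T / n
    f = lambda t: math.exp(-z * t) * t**(p - 1.0) / (1.0 + t)
    s = f(0.0) + f(T)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return math.exp(-z) * z**(p - 1.0) / math.gamma(p) * s * h / 3.0

ref = E(p, z)

# Truncated expansion (2.11.6): terms s = 0, ..., n-1, with n = ρ - p, i.e. α = 0
n = 8
plain, poch = 0.0, 1.0          # poch holds the Pochhammer symbol (p)_s
for s in range(n):
    plain += (-1)**s * poch / z**s
    poch *= p + s
plain *= math.exp(-z) / z

# Exponentially-improved value (2.11.10): F_{n+p}(z) from the s = 0, 1 terms
# of (2.11.13) with θ = 0, α = 0, ρ = z; a_2(0, 0) = 1/12 + 1/4 by (2.11.14)
rho = z
a2 = 1.0 / 12.0 + 0.25
F = math.exp(-rho - z) / (2.0 * math.sqrt(2.0 * math.pi * rho)) * (1.0 + a2 / rho)
improved = plain + (-1)**n * (2.0 * math.pi / math.gamma(p)) * z**(p - 1.0) * F
```

The correction term is of order ${\mathrm{e}}^{-2|z|}$ relative to the leading behavior, yet it reduces the error of the truncated series by more than an order of magnitude here.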

If we permit the use of nonelementary functions as approximants, then even more powerful re-expansions become available. One is uniformly valid for $-\pi +\delta \le \mathrm{ph}z\le 3\pi -\delta $ with bounded $|\alpha |$, and achieves uniform exponential improvement throughout $0\le \mathrm{ph}z\le \pi $:

2.11.15 | $${F}_{n+p}\left(z\right)\sim {(-1)}^{n}\mathrm{i}{\mathrm{e}}^{-p\pi \mathrm{i}}\left(\frac{1}{2}\mathrm{erfc}\left(\sqrt{\tfrac{1}{2}\rho}\,c(\theta )\right)-\mathrm{i}\frac{{\mathrm{e}}^{\mathrm{i}\rho (\pi -\theta )}{\mathrm{e}}^{-\rho -z}}{{(2\pi \rho )}^{1/2}}\sum _{s=0}^{\mathrm{\infty}}\frac{{h}_{2s}(\theta ,\alpha )}{{\rho}^{s}}\right).$$ | ||

Here $\mathrm{erfc}$ is the complementary error function (§7.2(i)), and

2.11.16 | $$c(\theta )=\sqrt{2(1+{\mathrm{e}}^{\mathrm{i}\theta}+\mathrm{i}(\theta -\pi ))},$$ | ||

the branch being continuous with $c(\theta )\sim \pi -\theta $ as $\theta \to \pi $. Also,

2.11.17 | $${h}_{2s}(\theta ,\alpha )=\frac{{\mathrm{e}}^{\mathrm{i}\alpha (\pi -\theta )}}{1+{\mathrm{e}}^{-\mathrm{i}\theta}}{a}_{2s}(\theta ,\alpha )+{(-1)}^{s-1}\mathrm{i}\frac{1\cdot 3\cdot 5\mathrm{\cdots}(2s-1)}{{(c(\theta ))}^{2s+1}},$$ | ||

with ${a}_{2s}(\theta ,\alpha )$ as in (2.11.13), (2.11.14). In particular,

2.11.18 | $${h}_{0}(\theta ,\alpha )=\frac{{\mathrm{e}}^{\mathrm{i}\alpha (\pi -\theta )}}{1+{\mathrm{e}}^{-\mathrm{i}\theta}}-\frac{\mathrm{i}}{c(\theta )}.$$ | ||

For the sector $-3\pi +\delta \le \mathrm{ph}z\le \pi -\delta $ the conjugate result applies.
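Realizing the branch of $c(\theta )$ numerically takes a little care. The following sketch (illustrative, not DLMF code) uses the fact that $w=2(1+{\mathrm{e}}^{\mathrm{i}\theta}+\mathrm{i}(\theta -\pi ))$ has nonnegative real part for real $\theta $, so the principal square root is continuous; flipping its sign for $\theta >\pi $ (where $c(\pi )=0$) then selects the branch with $c(\theta )\sim \pi -\theta $:

```python
import cmath
import math

def c(theta):
    # c(theta) from (2.11.16): w = 2(1 + e^{iθ} + i(θ - π)) has Re w >= 0 for
    # real θ, so cmath.sqrt (principal branch) is continuous here; the sign
    # flip for θ > π gives the branch continuous through c(π) = 0 with
    # c(θ) ~ π - θ as θ → π.
    w = 2.0 * (1.0 + cmath.exp(1j * theta) + 1j * (theta - math.pi))
    r = cmath.sqrt(w)
    return r if theta <= math.pi else -r
```

Consistently with §2.11(iv), this $c(\theta )$ lies in the right half-plane for $0\le \theta <\pi $ and in the left half-plane for $\pi <\theta <3\pi $, which is what drives $\mathrm{erfc}\left(\sqrt{\frac{1}{2}\rho}c(\theta )\right)$ from exponentially small to nearly 2.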

Two different asymptotic expansions in terms of elementary functions,
(2.11.6) and (2.11.7), are available for the
generalized exponential integral in the sector $\frac{1}{2}\pi <\mathrm{ph}z<\frac{3}{2}\pi $. That the change in their forms is discontinuous, even though
the function being approximated is analytic, is an example of the *Stokes
phenomenon*. Where should the change-over take place? Can it be accomplished
smoothly?

Satisfactory answers to these questions were found by Berry (1989); see also the survey by Paris and Wood (1995). These answers are linked to the terms involving the complementary error function in the more powerful expansions typified by the combination of (2.11.10) and (2.11.15). Thus if $0\le \theta \le \pi -\delta $ ($\delta >0$), then $c(\theta )$ lies in the right half-plane. Hence from §7.12(i) $\mathrm{erfc}\left(\sqrt{\frac{1}{2}\rho}c(\theta )\right)$ is of the same exponentially-small order of magnitude as the contribution from the other terms in (2.11.15) when $\rho $ is large. On the other hand, when $\pi +\delta \le \theta \le 3\pi -\delta $, $c(\theta )$ is in the left half-plane and $\mathrm{erfc}\left(\sqrt{\frac{1}{2}\rho}c(\theta )\right)$ differs from 2 by an exponentially-small quantity. In the transition through $\theta =\pi $, $\mathrm{erfc}\left(\sqrt{\frac{1}{2}\rho}c(\theta )\right)$ changes very rapidly, but smoothly, from one form to the other; compare the graph of its modulus in Figure 2.11.1 in the case $\rho =100$.

In particular, on the ray $\theta =\pi $ greatest accuracy is achieved by (a) taking the average of the expansions (2.11.6) and (2.11.7), followed by (b) taking account of the exponentially-small contributions arising from the terms involving ${h}_{2s}(\theta ,\alpha )$ in (2.11.15).

Rays (or curves) on which one contribution in a compound asymptotic expansion
achieves maximum dominance over another are called *Stokes lines*
($\theta =\pi $ in the present example). As these lines are crossed
exponentially-small contributions, such as that in (2.11.7), are
“switched on” smoothly, in the manner of the graph in Figure
2.11.1.

Expansions similar to (2.11.15) can be constructed for many other
special functions. However, to enjoy the resurgence property
(§2.7(ii)) we often seek instead expansions in terms of the
$F$-functions introduced in §2.11(iii), leaving the connection of
the error-function type behavior as an implicit consequence of this property of
the $F$-functions. In this context the $F$-functions are called
*terminants*, a name introduced by Dingle (1973).

For illustration, we give re-expansions of the remainder terms in the expansions (2.7.8) arising in differential-equation theory. For notational convenience assume that the original differential equation (2.7.1) is normalized so that ${\lambda}_{2}-{\lambda}_{1}=1$. (This means that, if necessary, $z$ is replaced by $z/({\lambda}_{2}-{\lambda}_{1})$.) From (2.7.12), (2.7.13) it is then seen that the optimum number of terms, $n$, in (2.7.14) is approximately $|z|$. We set

2.11.19 | $${w}_{j}(z)={\mathrm{e}}^{{\lambda}_{j}z}{z}^{{\mu}_{j}}\sum _{s=0}^{n-1}\frac{{a}_{s,j}}{{z}^{s}}+{R}_{n}^{(j)}(z),$$ | ||

$j=1,2$, | |||

and expand

2.11.20 | ${R}_{n}^{(1)}(z)={(-1)}^{n-1}\mathrm{i}{\mathrm{e}}^{({\mu}_{2}-{\mu}_{1})\pi \mathrm{i}}{\mathrm{e}}^{{\lambda}_{2}z}{z}^{{\mu}_{2}}\left({C}_{1}{\displaystyle \sum _{s=0}^{m-1}}{(-1)}^{s}{a}_{s,2}{\displaystyle \frac{{F}_{n+{\mu}_{2}-{\mu}_{1}-s}\left(z\right)}{{z}^{s}}}+{R}_{m,n}^{(1)}(z)\right),$ | ||

2.11.21 | ${R}_{n}^{(2)}(z)={(-1)}^{n}\mathrm{i}{\mathrm{e}}^{({\mu}_{2}-{\mu}_{1})\pi \mathrm{i}}{\mathrm{e}}^{{\lambda}_{1}z}{z}^{{\mu}_{1}}\left({C}_{2}{\displaystyle \sum _{s=0}^{m-1}}{(-1)}^{s}{a}_{s,1}{\displaystyle \frac{{F}_{n+{\mu}_{1}-{\mu}_{2}-s}\left(z{\mathrm{e}}^{-\pi \mathrm{i}}\right)}{{z}^{s}}}+{R}_{m,n}^{(2)}(z)\right),$ | ||

with $m=0,1,2,\mathrm{\dots}$, and ${C}_{1},{C}_{2}$ as in (2.7.17). Then as $z\to \mathrm{\infty}$, with $|n-|z||$ bounded and $m$ fixed,

2.11.22 | ${R}_{m,n}^{(1)}(z)=\begin{cases}O\left({\mathrm{e}}^{-|z|-z}{z}^{-m}\right),&|\mathrm{ph}z|\le \pi ,\\ O\left({z}^{-m}\right),&\pi \le |\mathrm{ph}z|\le \frac{5}{2}\pi -\delta ,\end{cases}$ | ||

2.11.23 | ${R}_{m,n}^{(2)}(z)=\begin{cases}O\left({\mathrm{e}}^{-|z|+z}{z}^{-m}\right),&0\le \mathrm{ph}z\le 2\pi ,\\ O\left({z}^{-m}\right),&-\frac{3}{2}\pi +\delta \le \mathrm{ph}z\le 0\text{ and }2\pi \le \mathrm{ph}z\le \frac{7}{2}\pi -\delta ,\end{cases}$ | ||

uniformly with respect to $\mathrm{ph}z$ in each case.

The relevant Stokes lines are $\mathrm{ph}z=\pm \pi $ for ${w}_{1}(z)$, and $\mathrm{ph}z=0,2\pi $ for ${w}_{2}(z)$. In addition to achieving uniform exponential improvement, particularly in $|\mathrm{ph}z|\le \pi $ for ${w}_{1}(z)$, and $0\le \mathrm{ph}z\le 2\pi $ for ${w}_{2}(z)$, the re-expansions (2.11.20), (2.11.21) are resurgent.

For further details see Olde Daalhuis and Olver (1994). For error bounds see Dunster (1996c). For other examples see Boyd (1990b), Paris (1992a, b), and Wong and Zhao (2002b).

Often the process of re-expansion can be repeated any number of times. In this
way we arrive at *hyperasymptotic expansions*. For integrals, see
Berry and Howls (1991), Howls (1992), and
Paris and Kaminski (2001, Chapter 6). For second-order differential equations, see Olde Daalhuis and Olver (1995a),
Olde Daalhuis (1995, 1996), and
Murphy and Wood (1997).

The transformations in §3.9 for summing slowly convergent series can also be very effective when applied to divergent asymptotic series.

A simple example is provided by Euler’s transformation (§3.9(ii)) applied to the asymptotic expansion for the exponential integral (§6.12(i)):

2.11.24 | $${\mathrm{e}}^{x}{E}_{1}\left(x\right)\sim \sum _{s=0}^{\mathrm{\infty}}{(-1)}^{s}\frac{s!}{{x}^{s+1}},$$ | ||

$x\to +\mathrm{\infty}$. | |||

Taking $x=5$ and rounding to 5D, we obtain

2.11.25 | $${\mathrm{e}}^{5}{E}_{1}\left(5\right)=0.20000-0.04000+0.01600-0.00960+0.00768-0.00768+0.00922-0.01290+0.02064-0.03716+0.07432-\mathrm{\cdots}.$$ | ||

The numerically smallest terms are the 5th and 6th. Truncation after 5 terms yields 0.17408, compared with the correct value

2.11.26 | $${\mathrm{e}}^{5}{E}_{1}\left(5\right)=0.17042\mathrm{\dots}.$$ | ||

We now compute the forward differences ${\mathrm{\Delta}}^{j}$, $j=0,1,2,\mathrm{\dots}$, of the moduli of the rounded values of the first 6 neglected terms:

2.11.27 | ${\mathrm{\Delta}}^{0}$ | $=0.00768$, | ||

${\mathrm{\Delta}}^{1}$ | $=0.00154$, | |||

${\mathrm{\Delta}}^{2}$ | $=0.00214$, | |||

${\mathrm{\Delta}}^{3}$ | $=0.00192$, | |||

${\mathrm{\Delta}}^{4}$ | $=0.00280$, | |||

${\mathrm{\Delta}}^{5}$ | $=0.00434$. | |||

Multiplying these differences by ${(-1)}^{j}{2}^{-j-1}$ and summing, we obtain

2.11.28 | $$0.00384-0.00038+0.00027-0.00012+0.00009-0.00007=0.00363.$$ | ||

Subtraction of this result from the sum of the first 5 terms in (2.11.25) yields 0.17045, which is much closer to the true value.

The process just used is equivalent to re-expanding the remainder term of the original asymptotic series (2.11.24) in powers of $1/(x+5)$ and truncating the new series optimally. Further improvements in accuracy can be realized by making a second application of the Euler transformation; see Olver (1997b, pp. 540–543).
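The computation in (2.11.25)–(2.11.28) is easily mechanized. The sketch below (illustrative Python, not DLMF code) repeats it with unrounded terms, so the figures differ slightly in the last place, and compares against a quadrature value of ${\mathrm{e}}^{5}{E}_{1}\left(5\right)={\int}_{0}^{\mathrm{\infty}}{\mathrm{e}}^{-5t}/(1+t)\,dt$:

```python
import math

x = 5.0
terms = [(-1)**s * math.factorial(s) / x**(s + 1) for s in range(11)]

# Optimal truncation of (2.11.24): sum the first 5 terms
direct = sum(terms[:5])                                   # 0.17408

# Euler transformation of the tail: forward differences Δ^j of the moduli
# of the next 6 terms, weighted by (-1)^j 2^{-j-1}
m = [abs(t) for t in terms[5:11]]
tail = sum(
    (-1)**j / 2**(j + 1)
    * sum((-1)**(j - k) * math.comb(j, k) * m[k] for k in range(j + 1))
    for j in range(6)
)
improved = direct - tail                                  # ≈ 0.17045

# Reference value by composite Simpson's rule on [0, T]; e^{-5T} is negligible
T, n = 12.0, 20000
h = T / n
f = lambda t: math.exp(-x * t) / (1.0 + t)
ref = (f(0.0) + f(T) + sum((4 if k % 2 else 2) * f(k * h)
                           for k in range(1, n))) * h / 3.0
```

As in the text, the transformed value is closer to the true value by roughly two orders of magnitude than optimal truncation alone.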

Similar improvements are achievable by Aitken’s ${\mathrm{\Delta}}^{2}$-process, Wynn’s $\u03f5$-algorithm, and other acceleration transformations. For a comprehensive survey see Weniger (1989).
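Their effect can be seen on the same series (2.11.25). The sketch below (illustrative, not DLMF code) applies Wynn’s $\u03f5$-algorithm in its standard form to the first eleven partial sums at $x=5$; the resulting even-order entry corresponds to a diagonal Padé approximant of the series:

```python
import math

x = 5.0
s, acc = [], 0.0
for k in range(11):
    acc += (-1)**k * math.factorial(k) / x**(k + 1)
    s.append(acc)                       # partial sums of (2.11.24) at x = 5

# Wynn's epsilon algorithm: eps_{-1}^{(n)} = 0, eps_0^{(n)} = s_n,
# eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1/(eps_k^{(n+1)} - eps_k^{(n)});
# the even-order columns carry the accelerated values.
prev = [0.0] * (len(s) + 1)             # eps_{-1}
cur = s[:]                              # eps_0
for _ in range(len(s) - 1):
    nxt = [prev[n + 1] + 1.0 / (cur[n + 1] - cur[n])
           for n in range(len(cur) - 1)]
    prev, cur = cur, nxt
accel = cur[0]                          # eps_10^{(0)}
```

For this Stieltjes-type series the accelerated value approaches ${\mathrm{e}}^{5}{E}_{1}\left(5\right)=0.17042\mathrm{\dots}$ (compare (2.11.26)) far beyond the accuracy of optimal truncation.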

The following example, based on Weniger (1996), illustrates their power.

For large $|z|$, with $|\mathrm{ph}z|\le \frac{3}{2}\pi -\delta $ ($\delta >0$), the Whittaker function of the second kind has the asymptotic expansion (§13.19)

2.11.29 | $${W}_{\kappa ,\mu}\left(z\right)\sim \sum _{n=0}^{\mathrm{\infty}}{a}_{n},$$ | ||

in which

2.11.30 | $${a}_{n}=\frac{{\mathrm{e}}^{-z/2}}{{z}^{n-\kappa}n!}\left({\mu}^{2}-{(\kappa -\frac{1}{2})}^{2}\right)\left({\mu}^{2}-{(\kappa -\frac{3}{2})}^{2}\right)\mathrm{\cdots}\left({\mu}^{2}-{(\kappa -n+\frac{1}{2})}^{2}\right).$$ | ||

With $z=1.0$, $\kappa =2.3$, $\mu =0.5$, the values of ${a}_{n}$ to 8D are supplied in the second column of Table 2.11.1.

| $n$ | ${a}_{n}$ | ${s}_{n}$ | ${d}_{n}$ |
|---|---|---|---|
| 0 | $\mathrm{0.60653\hspace{0.33em}066}$ | $\mathrm{0.60653\hspace{0.33em}066}$ | $\mathrm{0.60653\hspace{0.33em}066}$ |
| 1 | $-\mathrm{1.81352\hspace{0.33em}667}$ | $-\mathrm{1.20699\hspace{0.33em}601}$ | $-\mathrm{0.91106\hspace{0.33em}488}$ |
| 2 | $\mathrm{0.35363\hspace{0.33em}770}$ | $-\mathrm{0.85335\hspace{0.33em}831}$ | $-\mathrm{0.82413\hspace{0.33em}405}$ |
| 3 | $\mathrm{0.02475\hspace{0.33em}464}$ | $-\mathrm{0.82860\hspace{0.33em}367}$ | $-\mathrm{0.83323\hspace{0.33em}429}$ |
| 4 | $-\mathrm{0.00736\hspace{0.33em}451}$ | $-\mathrm{0.83596\hspace{0.33em}818}$ | $-\mathrm{0.83303\hspace{0.33em}750}$ |
| 5 | $\mathrm{0.00676\hspace{0.33em}062}$ | $-\mathrm{0.82920\hspace{0.33em}756}$ | $-\mathrm{0.83298\hspace{0.33em}901}$ |
| 6 | $-\mathrm{0.01125\hspace{0.33em}643}$ | $-\mathrm{0.84046\hspace{0.33em}399}$ | $-\mathrm{0.83299\hspace{0.33em}429}$ |
| 7 | $\mathrm{0.02796\hspace{0.33em}418}$ | $-\mathrm{0.81249\hspace{0.33em}981}$ | $-\mathrm{0.83299\hspace{0.33em}530}$ |
| 8 | $-\mathrm{0.09364\hspace{0.33em}504}$ | $-\mathrm{0.90614\hspace{0.33em}485}$ | $-\mathrm{0.83299\hspace{0.33em}504}$ |
| 9 | $\mathrm{0.39736\hspace{0.33em}710}$ | $-\mathrm{0.50877\hspace{0.33em}775}$ | $-\mathrm{0.83299\hspace{0.33em}501}$ |
| 10 | $-\mathrm{2.05001\hspace{0.33em}686}$ | $-\mathrm{2.55879\hspace{0.33em}461}$ | $-\mathrm{0.83299\hspace{0.33em}503}$ |

The next column lists the partial sums ${s}_{n}={a}_{0}+{a}_{1}+\mathrm{\dots}+{a}_{n}$. Optimum truncation occurs just prior to the numerically smallest term, that is, at ${s}_{4}$. Comparison with the true value

2.11.31 | $${W}_{2.3,0.5}\left(1.0\right)=-\mathrm{0.83299\hspace{0.33em}50268\hspace{0.33em}27526}\mathrm{\cdots}$$ | ||

shows that this direct estimate is correct to almost 3D.

The fourth column of Table 2.11.1 gives the results of applying the
following variant of *Levin’s transformation*:

2.11.32 | $${d}_{n}=\frac{{\sum}_{j=0}^{n}{(-1)}^{j}\left(\genfrac{}{}{0pt}{}{n}{j}\right){(j+1)}^{n-1}\frac{{s}_{j}}{{a}_{j+1}}}{{\sum}_{j=0}^{n}{(-1)}^{j}\left(\genfrac{}{}{0pt}{}{n}{j}\right){(j+1)}^{n-1}\frac{1}{{a}_{j+1}}}.$$ | ||

By $n=10$ we already have 8 correct decimals. Furthermore, on proceeding to higher values of $n$ with higher precision, much more accuracy is achievable. For example, using double precision, ${d}_{20}$ is found to agree with (2.11.31) to 13D.
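Table 2.11.1 can be reproduced directly from (2.11.30) and (2.11.32); a minimal double-precision sketch:

```python
import math

z, kappa, mu = 1.0, 2.3, 0.5

def a(n):
    # terms (2.11.30); the empty product for n = 0 gives a_0 = e^{-z/2} z^kappa
    prod = 1.0
    for k in range(1, n + 1):
        prod *= mu**2 - (kappa - k + 0.5)**2
    return math.exp(-z / 2) * z**(kappa - n) * prod / math.factorial(n)

s, acc = [], 0.0
for n in range(12):
    acc += a(n)
    s.append(acc)                  # partial sums s_n

def d(n):
    # Levin-type transformation (2.11.32)
    num = den = 0.0
    for j in range(n + 1):
        w = (-1)**j * math.comb(n, j) * (j + 1)**(n - 1) / a(j + 1)
        num += w * s[j]
        den += w
    return num / den
```

Here `d(10)` reproduces the last entry of the fourth column, agreeing with (2.11.31) to 8D.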

However, direct numerical transformations need to be used with care. Their extrapolation is based on assumed forms of remainder terms that may not always be appropriate for asymptotic expansions. For example, extrapolated values may converge to an accurate value on one side of a Stokes line (§2.11(iv)), and converge to a quite inaccurate value on the other.