2 Asymptotic Approximations

§2.11 Remainder Terms; Stokes Phenomenon

Contents
  1. §2.11(i) Numerical Use of Asymptotic Expansions
  2. §2.11(ii) Connection Formulas
  3. §2.11(iii) Exponentially-Improved Expansions
  4. §2.11(iv) Stokes Phenomenon
  5. §2.11(v) Exponentially-Improved Expansions (continued)
  6. §2.11(vi) Direct Numerical Transformations

§2.11(i) Numerical Use of Asymptotic Expansions

When a rigorous bound or reliable estimate for the remainder term is unavailable, it is unsafe to judge the accuracy of an asymptotic expansion merely from the numerical rate of decrease of the terms at the point of truncation. Even when the series converges this is unwise: the tail needs to be majorized rigorously before the result can be guaranteed. For divergent expansions the situation is even more difficult. First, it is impossible to bound the tail by majorizing its terms. Secondly, the asymptotic series represents an infinite class of functions, and the remainder depends on which member we have in mind.

As an example consider

2.11.1   $I(m) = \int_{0}^{\pi} \frac{\cos(mt)}{t^{2}+1}\,dt$,

with m a large integer. By integration by parts (§2.3(i))

2.11.2   $I(m) \sim (-1)^{m} \sum_{s=1}^{\infty} \frac{q_{s}(\pi)}{m^{2s}}$,   $m \to \infty$,

with

2.11.3   $q_{1}(t) = \frac{-2t}{(t^{2}+1)^{2}}$,
         $q_{2}(t) = \frac{24(t^{3}-t)}{(t^{2}+1)^{4}}$,
         $q_{3}(t) = \frac{-240(3t^{5}-10t^{3}+3t)}{(t^{2}+1)^{6}}$.

On rounding to 5D, we have q_1(π) = -0.05318, q_2(π) = 0.04791, q_3(π) = -0.08985. Taking m = 10 in (2.11.2), the first three terms give us the approximation

2.11.4   $I(10) \approx -0.00053\ 18 + 0.00000\ 48 - 0.00000\ 01 = -0.00052\ 71$.

But this answer is incorrect: to 7D I(10) = -0.00045 58. The error term is, in fact, approximately 700 times the last term obtained in (2.11.4). The explanation is that (2.11.2) is a more accurate expansion for the function I(m) - ½πe^{-m} than it is for I(m); see Olver (1997b, pp. 76–78).

In order to guard against this kind of error remaining undetected, the wanted function may need to be computed by another method (preferably nonasymptotic) for the smallest value of the (large) asymptotic variable x that is intended to be used. If the results agree within S significant figures, then it is likely—but not certain—that the truncated asymptotic series will yield at least S correct significant figures for larger values of x. For further discussion see Bosley (1996).
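The pitfall in this example is easy to reproduce. The following minimal sketch (in Python with the mpmath library — an arbitrary choice of tools, not part of the original text) evaluates I(10) nonasymptotically by quadrature, sums the first three terms of (2.11.2), and checks the discrepancy against ½πe^{-10}:

    from mpmath import mp, mpf, cos, exp, pi, quad

    mp.dps = 30

    def I(m):
        # nonasymptotic check: numerical quadrature of (2.11.1),
        # with the interval subdivided to tame the oscillation of cos(mt)
        return quad(lambda t: cos(m*t)/(t**2 + 1), [k*pi/10 for k in range(11)])

    def q(s, t):
        # q_1, q_2, q_3 from (2.11.3)
        return {1: -2*t/(t**2 + 1)**2,
                2: 24*(t**3 - t)/(t**2 + 1)**4,
                3: -240*(3*t**5 - 10*t**3 + 3*t)/(t**2 + 1)**6}[s]

    m = 10
    series = (-1)**m * sum(q(s, pi)/m**(2*s) for s in (1, 2, 3))   # (2.11.2), three terms
    print(series)                      # -0.000527...  (cf. (2.11.4))
    print(I(m))                        # -0.000455...  (the correct value)
    print(I(m) - pi*exp(-mpf(m))/2)    # -0.000527...  agrees with the series instead

The last line illustrates the remark above: the truncated series tracks I(m) - ½πe^{-m}, not I(m) itself.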

Both the modulus and the phase of the asymptotic variable z need to be taken into account. Suppose an asymptotic expansion holds as z → ∞ in any closed sector within α < ph z < β, say, but not in α ≤ ph z ≤ β. Then numerical accuracy will disintegrate as the boundary rays ph z = α, ph z = β are approached. In consequence, practical application needs to be confined to a sector α′ ≤ ph z ≤ β′ well within the sector of validity, and independent evaluations carried out on the boundaries for the smallest value of |z| intended to be used. The choice of α′ and β′ is facilitated by a knowledge of the relevant Stokes lines; see §2.11(iv) below.

However, regardless of whether we can bound the remainder, the accuracy achievable by direct numerical summation of a divergent asymptotic series is always limited. The rest of this section is devoted to general methods for increasing this accuracy.

§2.11(ii) Connection Formulas

From §8.19(i) the generalized exponential integral is given by

2.11.5   $E_{p}(z) = \frac{e^{-z} z^{p-1}}{\Gamma(p)} \int_{0}^{\infty} \frac{e^{-zt}\, t^{p-1}}{1+t}\,dt$

when ℜp > 0 and |ph z| < π/2, and by analytic continuation for other values of p and z. Application of Watson's lemma (§2.4(i)) yields

2.11.6   $E_{p}(z) \sim \frac{e^{-z}}{z} \sum_{s=0}^{\infty} \frac{(-1)^{s} (p)_{s}}{z^{s}}$

when p is fixed and z → ∞ in any closed sector within |ph z| < 3π/2. As noted in §2.11(i), poor accuracy is yielded by this expansion as ph z approaches -3π/2 or 3π/2. However, on combining (2.11.6) with the connection formula (8.19.18), with m=1, we derive

2.11.7   $E_{p}(z) \sim \frac{2\pi i\, e^{-p\pi i}}{\Gamma(p)}\, z^{p-1} + \frac{e^{-z}}{z} \sum_{s=0}^{\infty} \frac{(-1)^{s} (p)_{s}}{z^{s}}$,

valid as z → ∞ in any closed sector within π/2 < ph z < 7π/2; compare (8.20.3). Since the ray ph z = 3π/2 is well away from the new boundaries, the compound expansion (2.11.7) yields much more accurate results when ph z → 3π/2. In effect, (2.11.7) “corrects” (2.11.6) by introducing a term that is relatively exponentially small in the neighborhood of ph z = π, is increasingly significant as ph z passes from π to 3π/2, and becomes the dominant contribution after ph z passes 3π/2. See also §2.11(iv).
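The relative sizes just described can be seen by comparing only the moduli of the two contributions in (2.11.7). In the following sketch (Python/mpmath) the values p = 0.7 and ρ = |z| = 40 are arbitrary illustrative choices, not taken from the text; since p is real, the factors i and e^{-pπi} have unit modulus and are ignored:

    from mpmath import mp, mpf, pi, gamma, exp, cos

    mp.dps = 25
    p, rho = mpf('0.7'), mpf(40)

    algebraic = 2*pi*rho**(p - 1)/gamma(p)       # modulus of the z^{p-1} term in (2.11.7)
    for theta in (pi, 5*pi/4, 3*pi/2, 7*pi/4, 2*pi):
        exponential = exp(-rho*cos(theta))/rho   # modulus of e^{-z}/z at ph z = theta
        print(float(theta/pi), float(algebraic/exponential))

The printed ratio is exponentially small at ph z = π, grows through algebraic size near 3π/2, and is exponentially large beyond it, in line with the description above.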

§2.11(iii) Exponentially-Improved Expansions

The procedure followed in §2.11(ii) enabled E_p(z) to be computed with as much accuracy in the sector π ≤ ph z ≤ 3π as the original expansion (2.11.6) in |ph z| ≤ π. We now increase substantially the accuracy of (2.11.6) in |ph z| ≤ π by re-expanding the remainder term.

Optimum truncation in (2.11.6) takes place at s = n - 1, with |p + n - 1| = |z|, approximately. Thus

2.11.8   $n = \rho - p + \alpha$,

where z = ρe^{iθ}, and |α| is bounded as n → ∞. From (2.11.5) and the identity

2.11.9   $\frac{1}{1+t} = \sum_{s=0}^{n-1} (-1)^{s} t^{s} + \frac{(-1)^{n} t^{n}}{1+t}$,   $t \neq -1$,

we have

2.11.10   $E_{p}(z) = \frac{e^{-z}}{z} \sum_{s=0}^{n-1} \frac{(-1)^{s} (p)_{s}}{z^{s}} + (-1)^{n}\, \frac{2\pi}{\Gamma(p)}\, z^{p-1} F_{n+p}(z)$,

where

2.11.11   $F_{n+p}(z) = \frac{e^{-z}}{2\pi} \int_{0}^{\infty} \frac{e^{-zt}\, t^{n+p-1}}{1+t}\,dt = \frac{\Gamma(n+p)}{2\pi}\, \frac{E_{n+p}(z)}{z^{n+p-1}}$.

With n given by (2.11.8), we have

2.11.12   $F_{n+p}(z) = \frac{e^{-z}}{2\pi} \int_{0}^{\infty} \exp\left(-\rho\left(t e^{i\theta} - \ln t\right)\right) \frac{t^{\alpha-1}}{1+t}\,dt$.

For large ρ the integrand has a saddle point at t = e^{-iθ}. Following §2.4(iv), we rotate the integration path through an angle -θ, which is valid by analytic continuation when -π < θ < π. Then by application of Laplace’s method (§§2.4(iii) and 2.4(iv)), we have

2.11.13   $F_{n+p}(z) \sim \frac{e^{-i(\rho+\alpha)\theta}}{1+e^{-i\theta}}\, \frac{e^{-\rho-z}}{(2\pi\rho)^{1/2}} \sum_{s=0}^{\infty} \frac{a_{2s}(\theta,\alpha)}{\rho^{s}}$,   $\rho \to \infty$,

uniformly when θ ∈ [-π + δ, π - δ] (δ > 0) and |α| is bounded. The coefficients are rational functions of α and 1 + e^{iθ}, for example, a_0(θ,α) = 1, and

2.11.14   $a_{2}(\theta,\alpha) = \tfrac{1}{12}\left(6\alpha^{2} - 6\alpha + 1\right) - \frac{\alpha}{1+e^{i\theta}} + \frac{1}{\left(1+e^{i\theta}\right)^{2}}$.

Owing to the factor e^{-ρ}, that is, e^{-|z|}, in (2.11.13), F_{n+p}(z) is uniformly exponentially small compared with E_p(z). For this reason the expansion of E_p(z) in |ph z| ≤ π - δ supplied by (2.11.8), (2.11.10), and (2.11.13) is said to be exponentially improved.
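On the positive real axis (θ = 0) the exponential improvement is easy to verify numerically. The sketch below (Python/mpmath; the values p = 1.5 and z = 20 are illustrative choices, and mpmath’s expint is used only to supply reference values of E_p) truncates (2.11.6) optimally, forms the exact remainder, and compares it with (2.11.10), using both the exact F_{n+p} from (2.11.11) and the leading term of (2.11.13):

    from mpmath import mp, mpf, exp, gamma, rf, expint, sqrt, pi

    mp.dps = 40
    p, z = mpf('1.5'), mpf(20)
    rho = z                                  # theta = 0, so rho = |z| = z
    n = int(rho - p + 1)                     # optimal truncation: |p + n - 1| close to |z|, cf. (2.11.8)

    # optimally truncated expansion (2.11.6)
    truncated = exp(-z)/z * sum((-1)**s * rf(p, s)/z**s for s in range(n))
    remainder = expint(p, z) - truncated     # exact remainder

    # remainder predicted by (2.11.10), with F_{n+p} taken from (2.11.11) ...
    F_exact = gamma(n + p)*expint(n + p, z)/(2*pi*z**(n + p - 1))
    # ... and with F_{n+p} replaced by the leading term of (2.11.13) at theta = 0
    F_lead = exp(-2*rho)/(2*sqrt(2*pi*rho))

    scale = (-1)**n * (2*pi/gamma(p)) * z**(p - 1)
    print(remainder)       # exponentially small compared with the leading term e^{-z}/z (~1e-10)
    print(scale*F_exact)   # matches the remainder: the identity (2.11.10) is exact
    print(scale*F_lead)    # close agreement; the first neglected correction is of order 1/rho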

If we permit the use of nonelementary functions as approximants, then even more powerful re-expansions become available. One is uniformly valid for -π + δ ≤ ph z ≤ 3π - δ with bounded |α|, and achieves uniform exponential improvement throughout 0 ≤ ph z ≤ π:

2.11.15   $F_{n+p}(z) \sim (-1)^{n} i e^{-p\pi i} \left( \tfrac{1}{2}\operatorname{erfc}\left(\left(\tfrac{1}{2}\rho\right)^{1/2} c(\theta)\right) - \frac{i e^{i\rho(\pi-\theta)}\, e^{-\rho-z}}{(2\pi\rho)^{1/2}} \sum_{s=0}^{\infty} \frac{h_{2s}(\theta,\alpha)}{\rho^{s}} \right)$.

Here erfc is the complementary error function (§7.2(i)), and

2.11.16   $c(\theta) = \left\{2\left(1 + e^{i\theta} + i(\theta - \pi)\right)\right\}^{1/2}$,

the branch being continuous with c(θ) ∼ π - θ as θ → π. Also,

2.11.17   $h_{2s}(\theta,\alpha) = \frac{e^{i\alpha(\pi-\theta)}}{1+e^{-i\theta}}\, a_{2s}(\theta,\alpha) + (-1)^{s-1}\, i\, \frac{1\cdot 3\cdot 5\cdots(2s-1)}{(c(\theta))^{2s+1}}$,

with a_{2s}(θ,α) as in (2.11.13), (2.11.14). In particular,

2.11.18   $h_{0}(\theta,\alpha) = \frac{e^{i\alpha(\pi-\theta)}}{1+e^{-i\theta}} - \frac{i}{c(\theta)}$.

For the sector -3π + δ ≤ ph z ≤ π - δ the conjugate result applies.

Further details for this example are supplied in Olver (1991a, 1994b). See also Paris and Kaminski (2001, Chapter 6), and Dunster (1996b, 1997).

§2.11(iv) Stokes Phenomenon

Two different asymptotic expansions in terms of elementary functions, (2.11.6) and (2.11.7), are available for the generalized exponential integral in the sector π/2 < ph z < 3π/2. That the change in their forms is discontinuous, even though the function being approximated is analytic, is an example of the Stokes phenomenon. Where should the change-over take place? Can it be accomplished smoothly?

Satisfactory answers to these questions were found by Berry (1989); see also the survey by Paris and Wood (1995). These answers are linked to the terms involving the complementary error function in the more powerful expansions typified by the combination of (2.11.10) and (2.11.15). Thus if 0 ≤ θ ≤ π - δ (< π), then c(θ) lies in the right half-plane. Hence from §7.12(i) erfc((½ρ)^{1/2} c(θ)) is of the same exponentially-small order of magnitude as the contribution from the other terms in (2.11.15) when ρ is large. On the other hand, when π + δ ≤ θ ≤ 3π - δ, c(θ) is in the left half-plane and erfc((½ρ)^{1/2} c(θ)) differs from 2 by an exponentially-small quantity. In the transition through θ = π, erfc((½ρ)^{1/2} c(θ)) changes very rapidly, but smoothly, from one form to the other; compare the graph of its modulus in Figure 2.11.1 in the case ρ = 100.

Figure 2.11.1: Graph of |erfc(√50 c(θ))|.
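The behavior shown in Figure 2.11.1 can be reproduced directly from (2.11.16). In the sketch below (Python/mpmath), ρ = 100 as in the figure; taking the principal square root for θ ≤ π and its negative for θ > π is one simple way of realizing the prescribed branch on this range of θ:

    from mpmath import mp, mpf, exp, sqrt, erfc, pi

    mp.dps = 25
    rho = mpf(100)

    def c(theta):
        # c(theta) of (2.11.16), branch chosen so that c ~ pi - theta near theta = pi
        w = 2*(1 + exp(1j*theta) + 1j*(theta - pi))
        root = sqrt(w)                    # principal square root
        return root if theta <= pi else -root

    for k in range(6, 15):                # theta from 0.6*pi to 1.4*pi
        theta = mpf(k)/10*pi
        print(float(theta/pi), float(abs(erfc(sqrt(rho/2)*c(theta)))))

The printed modulus rises smoothly from near 0 for θ < π, through 1 at θ = π, to near 2 for θ > π, which is the switching-on behavior described in the next paragraphs.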

In particular, on the ray θ = π, greatest accuracy is achieved by (a) taking the average of the expansions (2.11.6) and (2.11.7), followed by (b) taking account of the exponentially-small contributions arising from the terms involving h_{2s}(θ,α) in (2.11.15).

Rays (or curves) on which one contribution in a compound asymptotic expansion achieves maximum dominance over another are called Stokes lines (θ = π in the present example). As these lines are crossed, exponentially-small contributions, such as that in (2.11.7), are “switched on” smoothly, in the manner of the graph in Figure 2.11.1.

For higher-order Stokes phenomena see Olde Daalhuis (2004b) and Howls et al. (2004).

§2.11(v) Exponentially-Improved Expansions (continued)

Expansions similar to (2.11.15) can be constructed for many other special functions. However, to enjoy the resurgence property (§2.7(ii)) we often seek instead expansions in terms of the F-functions introduced in §2.11(iii), leaving the connection of the error-function type behavior as an implicit consequence of this property of the F-functions. In this context the F-functions are called terminants, a name introduced by Dingle (1973).

For illustration, we give re-expansions of the remainder terms in the expansions (2.7.8) arising in differential-equation theory. For notational convenience assume that the original differential equation (2.7.1) is normalized so that λ_2 - λ_1 = 1. (This means that, if necessary, z is replaced by z/(λ_2 - λ_1).) From (2.7.12), (2.7.13) it is then seen that the optimum number of terms, n, in (2.7.14) is approximately |z|. We set

2.11.19   $w_{j}(z) = e^{\lambda_{j} z} z^{\mu_{j}} \sum_{s=0}^{n-1} \frac{a_{s,j}}{z^{s}} + R_{n}^{(j)}(z)$,   $j = 1, 2$,

and expand

2.11.20   $R_{n}^{(1)}(z) = (-1)^{n-1} i\, e^{(\mu_{2}-\mu_{1})\pi i}\, e^{\lambda_{2} z} z^{\mu_{2}} \times \left( C_{1} \sum_{s=0}^{m-1} (-1)^{s} a_{s,2}\, \frac{F_{n+\mu_{2}-\mu_{1}-s}(z)}{z^{s}} + R_{m,n}^{(1)}(z) \right)$,
2.11.21   $R_{n}^{(2)}(z) = (-1)^{n} i\, e^{(\mu_{2}-\mu_{1})\pi i}\, e^{\lambda_{1} z} z^{\mu_{1}} \times \left( C_{2} \sum_{s=0}^{m-1} (-1)^{s} a_{s,1}\, \frac{F_{n+\mu_{1}-\mu_{2}-s}(z e^{-\pi i})}{z^{s}} + R_{m,n}^{(2)}(z) \right)$,

with m = 0, 1, 2, …, and C_1, C_2 as in (2.7.17). Then as z → ∞, with |n - |z|| bounded and m fixed,

2.11.22   $R_{m,n}^{(1)}(z) = \begin{cases} O\left(e^{-|z|-z} z^{-m}\right), & |\mathrm{ph}\,z| \le \pi, \\ O\left(z^{-m}\right), & \pi \le |\mathrm{ph}\,z| \le \tfrac{5}{2}\pi - \delta, \end{cases}$
2.11.23   $R_{m,n}^{(2)}(z) = \begin{cases} O\left(e^{-|z|+z} z^{-m}\right), & 0 \le \mathrm{ph}\,z \le 2\pi, \\ O\left(z^{-m}\right), & -\tfrac{3}{2}\pi + \delta \le \mathrm{ph}\,z \le 0 \ \text{and}\ 2\pi \le \mathrm{ph}\,z \le \tfrac{7}{2}\pi - \delta, \end{cases}$

uniformly with respect to ph z in each case.

The relevant Stokes lines are ph z = ±π for w_1(z), and ph z = 0, 2π for w_2(z). In addition to achieving uniform exponential improvement, particularly in |ph z| ≤ π for w_1(z), and 0 ≤ ph z ≤ 2π for w_2(z), the re-expansions (2.11.20), (2.11.21) are resurgent.

For further details see Olde Daalhuis and Olver (1994). For error bounds see Dunster (1996c). For other examples see Boyd (1990b), Paris (1992a, b), and Wong and Zhao (2002b).

Often the process of re-expansion can be repeated any number of times. In this way we arrive at hyperasymptotic expansions. For integrals, see Berry and Howls (1991), Howls (1992), and Paris and Kaminski (2001, Chapter 6). For second-order differential equations, see Olde Daalhuis and Olver (1995a), Olde Daalhuis (1995, 1996), and Murphy and Wood (1997).

For higher-order differential equations, see Olde Daalhuis (1998a, b). The first of these two references also provides an introduction to the powerful Borel transform theory. In this connection see also Byatt-Smith (2000).

For nonlinear differential equations see Olde Daalhuis (2005a, b).

For another approach see Paris (2001a, b).

§2.11(vi) Direct Numerical Transformations

The transformations in §3.9 for summing slowly convergent series can also be very effective when applied to divergent asymptotic series.

A simple example is provided by Euler’s transformation (§3.9(ii)) applied to the asymptotic expansion for the exponential integral (§6.12(i)):

2.11.24   $e^{x} E_{1}(x) \sim \sum_{s=0}^{\infty} \frac{(-1)^{s}\, s!}{x^{s+1}}$,   $x \to +\infty$.

Taking x=5 and rounding to 5D, we obtain

2.11.25   $e^{5} E_{1}(5) = 0.20000 - 0.04000 + 0.01600 - 0.00960 + 0.00768 - 0.00768 + 0.00922 - 0.01290 + 0.02064 - 0.03716 + 0.07432 - \cdots$.

The numerically smallest terms are the 5th and 6th. Truncation after 5 terms yields 0.17408, compared with the correct value

2.11.26   $e^{5} E_{1}(5) = 0.17042$.

We now compute the forward differences Δ^j, j = 0, 1, 2, …, of the moduli of the rounded values of the first 6 neglected terms:

2.11.27   $\Delta^{0} = 0.00768$,
          $\Delta^{1} = 0.00154$,
          $\Delta^{2} = 0.00214$,
          $\Delta^{3} = 0.00192$,
          $\Delta^{4} = 0.00280$,
          $\Delta^{5} = 0.00434$.

Multiplying these differences by (-1)^j 2^{-j-1} and summing, we obtain

2.11.28   $0.00384 - 0.00038 + 0.00027 - 0.00012 + 0.00009 - 0.00007 = 0.00363$.

Subtraction of this result from the sum of the first 5 terms in (2.11.25) yields 0.17045, which is much closer to the true value.

The process just used is equivalent to re-expanding the remainder term of the original asymptotic series (2.11.24) in powers of 1/(x+5) and truncating the new series optimally. Further improvements in accuracy can be realized by making a second application of the Euler transformation; see Olver (1997b, pp. 540–543).
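The arithmetic in (2.11.25)–(2.11.28) is mechanical and easily scripted. Here is a minimal sketch (Python/mpmath; it works with exact rather than 5D-rounded terms, so the figures differ from the text in the last place or so):

    from mpmath import mp, mpf, factorial, binomial, exp, expint

    mp.dps = 20
    x = mpf(5)
    term = lambda s: (-1)**s * factorial(s)/x**(s + 1)      # terms of (2.11.24)

    head = sum(term(s) for s in range(5))                   # first 5 terms: 0.17408
    u = [abs(term(s)) for s in range(5, 11)]                # moduli of the next 6 terms

    # forward differences of u, as in (2.11.27)
    delta = [sum((-1)**i * binomial(j, i) * u[j - i] for i in range(j + 1)) for j in range(6)]

    # multiply by (-1)^j 2^{-j-1}, sum, and subtract, as in (2.11.28)
    correction = sum((-1)**j * delta[j]/2**(j + 1) for j in range(6))
    print(head - correction)                                # 0.17045..., cf. 0.17042
    print(exp(x)*expint(1, x))                              # reference value of e^5 E_1(5)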

Similar improvements are achievable by Aitken’s Δ²-process, Wynn’s ε-algorithm, and other acceleration transformations. For a comprehensive survey see Weniger (1989).

The following example, based on Weniger (1996), illustrates their power.

For large |z|, with |ph z| ≤ 3π/2 - δ (< 3π/2), the Whittaker function of the second kind has the asymptotic expansion (§13.19)

2.11.29   $W_{\kappa,\mu}(z) \sim \sum_{n=0}^{\infty} a_{n}$,

in which

2.11.30   $a_{n} = \frac{e^{-z/2}}{z^{\,n-\kappa}\, n!} \left(\mu^{2} - \left(\kappa - \tfrac{1}{2}\right)^{2}\right) \left(\mu^{2} - \left(\kappa - \tfrac{3}{2}\right)^{2}\right) \cdots \left(\mu^{2} - \left(\kappa - n + \tfrac{1}{2}\right)^{2}\right)$.

With z = 1.0, κ = 2.3, μ = 0.5, the values of a_n to 8D are supplied in the second column of Table 2.11.1.

Table 2.11.1: Whittaker functions with Levin’s transformation.
 n      a_n             s_n             d_n
 0      0.60653 066     0.60653 066     0.60653 066
 1     -1.81352 667    -1.20699 601    -0.91106 488
 2      0.35363 770    -0.85335 831    -0.82413 405
 3      0.02475 464    -0.82860 367    -0.83323 429
 4     -0.00736 451    -0.83596 818    -0.83303 750
 5      0.00676 062    -0.82920 756    -0.83298 901
 6     -0.01125 643    -0.84046 399    -0.83299 429
 7      0.02796 418    -0.81249 981    -0.83299 530
 8     -0.09364 504    -0.90614 485    -0.83299 504
 9      0.39736 710    -0.50877 775    -0.83299 501
10     -2.05001 686    -2.55879 461    -0.83299 503

The next column lists the partial sums s_n = a_0 + a_1 + ⋯ + a_n. Optimum truncation occurs just prior to the numerically smallest term, that is, at s_4. Comparison with the true value

2.11.31   $W_{2.3,\,0.5}(1.0) = -0.83299\ 50268\ 27526$

shows that this direct estimate is correct to almost 3D.

The fourth column of Table 2.11.1 gives the results of applying the following variant of Levin’s transformation:

2.11.32   $d_{n} = \left. \sum_{j=0}^{n} (-1)^{j} \binom{n}{j} (j+1)^{n-1} \frac{s_{j}}{a_{j+1}} \right/ \sum_{j=0}^{n} (-1)^{j} \binom{n}{j} (j+1)^{n-1} \frac{1}{a_{j+1}}$.

By n = 10 we already have 8 correct decimals. Furthermore, on proceeding to higher values of n with higher precision, much more accuracy is achievable. For example, using double precision, d_20 is found to agree with (2.11.31) to 13D.
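For completeness, the following sketch (Python/mpmath) reproduces the d_n column of Table 2.11.1 from (2.11.30) and (2.11.32); mpmath’s whitw is used here merely to supply an independent reference value in place of (2.11.31):

    from mpmath import mp, mpf, exp, factorial, binomial, whitw

    mp.dps = 30
    z, kappa, mu = mpf(1), mpf('2.3'), mpf('0.5')

    def a(n):
        # terms (2.11.30)
        prod = mpf(1)
        for j in range(1, n + 1):
            prod *= mu**2 - (kappa - j + mpf(1)/2)**2
        return exp(-z/2)*prod/(z**(n - kappa)*factorial(n))

    def s(n):
        return sum(a(k) for k in range(n + 1))

    def d(n):
        # Levin-type transformation (2.11.32)
        num = sum((-1)**j * binomial(n, j)*(j + 1)**(n - 1)*s(j)/a(j + 1) for j in range(n + 1))
        den = sum((-1)**j * binomial(n, j)*(j + 1)**(n - 1)/a(j + 1) for j in range(n + 1))
        return num/den

    for n in (1, 4, 10):
        print(n, d(n))                    # d(10) is approximately -0.83299503
    print(whitw(kappa, mu, z))            # reference value, cf. (2.11.31)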

However, direct numerical transformations need to be used with care. Their extrapolation is based on assumed forms of remainder terms that may not always be appropriate for asymptotic expansions. For example, extrapolated values may converge to an accurate value on one side of a Stokes line (§2.11(iv)), and converge to a quite inaccurate value on the other.