# via divided differences


## 1–10 of 21 matching pages

##### 2: 19.36 Methods of Computation
Numerical differences between the variables of a symmetric integral can be reduced in magnitude by successive factors of 4 by repeated applications of the duplication theorem, as shown by (19.26.18). When the differences are moderately small, the iteration is stopped, the elementary symmetric functions of certain differences are calculated, and a polynomial consisting of a fixed number of terms of the sum in (19.19.7) is evaluated. … Here $R_{C}$ is computed either by the duplication algorithm in Carlson (1995) or via (19.2.19). … The cases $k_{c}^{2}/2\leq p<\infty$ and $-\infty<p<k_{c}^{2}/2$ require different treatment for numerical purposes, and again precautions are needed to avoid cancellations.
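The duplication-theorem iteration described above is the core of Carlson's algorithm for the symmetric integral $R_F$. Below is a minimal Python sketch for real nonnegative arguments with at most one of them zero; the function name and the stopping tolerance are ad hoc choices, not from the source:

```python
import math

def rf(x, y, z, tol=1e-4):
    """Duplication algorithm for the symmetric elliptic integral
    R_F(x, y, z); real arguments, at most one of them zero.
    Each duplication step reduces the differences between the
    variables by roughly a factor of 4."""
    while True:
        lam = math.sqrt(x * y) + math.sqrt(y * z) + math.sqrt(z * x)
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        mu = (x + y + z) / 3
        dx, dy, dz = 1 - x / mu, 1 - y / mu, 1 - z / mu
        if max(abs(dx), abs(dy), abs(dz)) < tol:
            break
    # Fixed-length polynomial in the elementary symmetric functions
    # of the now-small differences (note dx + dy + dz = 0).
    e2 = dx * dy + dy * dz + dz * dx
    e3 = dx * dy * dz
    return (1 - e2 / 10 + e3 / 14 + e2 ** 2 / 24 - 3 * e2 * e3 / 44) / math.sqrt(mu)

# Complete elliptic integral K(1/sqrt(2)) = R_F(0, 1/2, 1)
print(rf(0.0, 0.5, 1.0))
```

Because the truncation error of the final polynomial is of sixth order in the differences, a loose tolerance like `1e-4` already yields close to full double precision.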
##### 3: 18.30 Associated OP’s
Note that this is the same recurrence as in (18.2.8) for the traditional OP’s, but with a different initialization. … $F(z)$ and $F_{n}(z)$ of (18.30.23) and (18.30.24) are also precisely those of (18.2.34) and (18.2.35), now expressed via the traditional $A_{n}$, $B_{n}$, $C_{n}$ coefficients rather than the monic $\alpha_{n}$, $\beta_{n}$ recursion coefficients. … The ratio $p_{n}^{(0)}(z)/p_{n}(z)$, as defined here, thus provides the same statement of Markov’s Theorem as in (18.2.9_5), but now in terms of differently obtained numerator and denominator polynomials. …Ismail (2009, §2.6) discusses this in a different $N_{n}/D_{n}$ notation; note also the assumption $\mu_{0}=1$ made throughout that reference (Ismail (2009, p. 16)). … Defining associated orthogonal polynomials and their relationship to their corecursive counterparts is particularly simple via the recursion relations for the monic, rather than the traditional, polynomials. …
##### 4: 3.6 Linear Difference Equations
###### §3.6 Linear Difference Equations
See also Gautschi (1967) and Gil et al. (2007a, Chapter 4) for the computation of recessive solutions via continued fractions. … The difference equation … …
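A standard way of computing a recessive (minimal) solution of a three-term difference equation is backward recurrence (Miller's algorithm), closely related to the continued-fraction approach cited above. A sketch for the Bessel coefficients $J_n(x)$, which satisfy $y_{n-1}-(2n/x)y_n+y_{n+1}=0$; the starting index and seed value are ad hoc choices:

```python
def bessel_j_miller(x, nmax, nstart=30):
    """Miller's algorithm: run y_{n-1} = (2n/x) y_n - y_{n+1}
    backwards from an arbitrary tiny seed.  The recessive solution
    J_n(x) dominates in the backward direction, so the seed's error
    dies out.  Normalize with J_0(x) + 2*sum_{k>=1} J_{2k}(x) = 1."""
    y = [0.0] * (nstart + 2)
    y[nstart] = 1e-30          # arbitrary small seed
    for n in range(nstart, 0, -1):
        y[n - 1] = (2 * n / x) * y[n] - y[n + 1]
    s = y[0] + 2 * sum(y[2 * k] for k in range(1, nstart // 2 + 1))
    return [y[n] / s for n in range(nmax + 1)]

j = bessel_j_miller(1.0, 2)
print(j[0], j[1])   # J_0(1) and J_1(1)
```

Forward recurrence would be hopeless here: the dominant solution $Y_n(x)$ swamps $J_n(x)$, which is exactly why the recessive solution must be generated backwards.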
##### 5: 11.13 Methods of Computation
Although the power-series expansions (11.2.1) and (11.2.2), and the Bessel-function expansions of §11.4(iv) converge for all finite values of $z$, they are cumbersome to use when $|z|$ is large owing to slowness of convergence and cancellation. For large $|z|$ and/or $|\nu|$ the asymptotic expansions given in §11.6 should be used instead. … Subsequently $\mathbf{H}_{\nu}\left(z\right)$ and $\mathbf{L}_{\nu}\left(z\right)$ are obtainable via (11.2.5) and (11.2.6). …
###### §11.13(v) Difference Equations
Sequences of values of $\mathbf{H}_{\nu}\left(z\right)$ and $\mathbf{L}_{\nu}\left(z\right)$, with $z$ fixed, can be computed by application of the inhomogeneous difference equations (11.4.23) and (11.4.25). …
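For small orders this is straightforward to illustrate. The sketch below seeds the recurrence with the power series (11.2.1) for $\mathbf{H}_{0}$ and $\mathbf{H}_{1}$ and then applies the inhomogeneous recurrence (11.4.23), $\mathbf{H}_{\nu+1}(z)=(2\nu/z)\mathbf{H}_{\nu}(z)-\mathbf{H}_{\nu-1}(z)+(\tfrac12 z)^{\nu}/\bigl(\sqrt{\pi}\,\Gamma(\nu+\tfrac32)\bigr)$; function names and the term count are my own choices, and upward recurrence is only used for a few orders since errors grow in that direction:

```python
import math

def struve_series(nu, z, terms=30):
    """Power series (DLMF 11.2.1) for the Struve function H_nu(z)."""
    s = 0.0
    for k in range(terms):
        s += (-1) ** k * (z / 2) ** (2 * k + nu + 1) / (
            math.gamma(k + 1.5) * math.gamma(k + nu + 1.5))
    return s

def struve_by_recurrence(nmax, z):
    """H_0, H_1, ..., H_nmax at fixed z via the inhomogeneous
    difference equation (11.4.23)."""
    h = [struve_series(0, z), struve_series(1, z)]
    for nu in range(1, nmax):
        h.append((2 * nu / z) * h[nu] - h[nu - 1]
                 + (z / 2) ** nu / (math.sqrt(math.pi) * math.gamma(nu + 1.5)))
    return h

h = struve_by_recurrence(3, 2.0)   # self-consistent with the series
```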
##### 6: 18.18 Sums
18.18.13 $H_{n}\left(\lambda x\right)=\lambda^{n}\sum_{\ell=0}^{\left\lfloor n/2\right% \rfloor}\frac{{\left(-n\right)_{2\ell}}}{\ell!}(1-\lambda^{-2})^{\ell}H_{n-2% \ell}\left(x\right).$
18.18.17 $(2x)^{n}=n!\sum_{\ell=0}^{\left\lfloor n/2\right\rfloor}\frac{\lambda+n-2\ell}% {\lambda}\frac{1}{{\left(\lambda+1\right)_{n-\ell}}\,\ell!}C^{(\lambda)}_{n-2% \ell}\left(x\right).$
18.18.20 $(2x)^{n}=\sum_{\ell=0}^{\left\lfloor n/2\right\rfloor}\frac{{\left(-n\right)_{% 2\ell}}}{\ell!}H_{n-2\ell}\left(x\right).$
18.18.27 $\sum_{n=0}^{\infty}\frac{n!\,L^{(\alpha)}_{n}\left(x\right)L^{(\alpha)}_{n}% \left(y\right)}{{\left(\alpha+1\right)_{n}}}z^{n}=\frac{\Gamma\left(\alpha+1% \right)(xyz)^{-\frac{1}{2}\alpha}}{1-z}\*\exp\left(\frac{-(x+y)z}{1-z}\right)I% _{\alpha}\left(\frac{2(xyz)^{\frac{1}{2}}}{1-z}\right),$ $|z|<1$.
18.18.28 $\sum_{n=0}^{\infty}\frac{H_{n}\left(x\right)H_{n}\left(y\right)}{2^{n}n!}z^{n}% =(1-z^{2})^{-\frac{1}{2}}\exp\left(\frac{2xyz-(x^{2}+y^{2})z^{2}}{1-z^{2}}% \right),$ $|z|<1$.
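The bilinear generating function (18.18.28) (Mehler's formula) is easy to verify numerically: build $H_n$ with the standard recurrence $H_{n+1}(x)=2xH_n(x)-2nH_{n-1}(x)$ and compare the partial sum against the closed form. A sketch, with function names and the truncation point chosen by me:

```python
import math

def hermite_list(nmax, x):
    """H_0, ..., H_nmax via H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)."""
    h = [1.0, 2.0 * x]
    for n in range(1, nmax):
        h.append(2 * x * h[n] - 2 * n * h[n - 1])
    return h[:nmax + 1]

def mehler_lhs(x, y, z, terms=60):
    """Partial sum of (18.18.28); converges like z^n for |z| < 1."""
    hx, hy = hermite_list(terms, x), hermite_list(terms, y)
    return sum(hx[n] * hy[n] * z ** n / (2 ** n * math.factorial(n))
               for n in range(terms + 1))

def mehler_rhs(x, y, z):
    """Closed form of (18.18.28)."""
    return (1 - z * z) ** -0.5 * math.exp(
        (2 * x * y * z - (x * x + y * y) * z * z) / (1 - z * z))

print(mehler_lhs(0.3, 0.7, 0.5), mehler_rhs(0.3, 0.7, 0.5))
```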
##### 7: 22.20 Methods of Computation
###### §22.20(i) Via Theta Functions
A powerful way of computing the twelve Jacobian elliptic functions for real or complex values of both the argument $z$ and the modulus $k$ is to use the definitions in terms of theta functions given in §22.2, obtaining the theta functions via methods described in §20.14. … for $n\geq 1$, where the square root is chosen so that $\operatorname{ph}b_{n}=\tfrac{1}{2}(\operatorname{ph}a_{n-1}+\operatorname{ph}% b_{n-1})$, where $\operatorname{ph}a_{n-1}$ and $\operatorname{ph}b_{n-1}$ are chosen so that their difference is numerically less than $\pi$. … From (22.7.1), $k_{1}=\tfrac{1}{19}$ and $x/(1+k_{1})=0.19$. … If $k=k^{\prime}=1/\sqrt{2}$, then three iterations of (22.20.1) give $M=0.84721\;30848$, and from (22.20.6) $K=\pi/(2M)=1.85407\;46773$, in agreement with the value of $\left(\Gamma\left(\tfrac{1}{4}\right)\right)^{2}/\left(4\sqrt{\pi}\right)$; compare (23.17.3) and (23.22.2). …
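The worked numbers above are easy to reproduce: for real modulus, the arithmetic-geometric mean iteration $a_{n+1}=\tfrac12(a_n+b_n)$, $b_{n+1}=\sqrt{a_nb_n}$ started from $a_0=1$, $b_0=k'$ converges quadratically to the limit $M$, and $K=\pi/(2M)$. A minimal Python check (variable names and tolerance are mine):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b; converges quadratically."""
    while abs(a - b) > tol * abs(a):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

kprime = 1 / math.sqrt(2)      # the case k = k' = 1/sqrt(2)
M = agm(1.0, kprime)
K = math.pi / (2 * M)
print(M, K)   # 0.84721 30848... and 1.85407 46773..., as quoted above
```

The cross-check against $\bigl(\Gamma(\tfrac14)\bigr)^2/(4\sqrt{\pi})$ mentioned in the text holds to machine precision.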
##### 8: 18.19 Hahn Class: Definitions
The Askey scheme extends the three families of classical OP’s (Jacobi, Laguerre and Hermite) with eight further families of OP’s for which the role of the differentiation operator $\frac{\mathrm{d}}{\mathrm{d}x}$ in the case of the classical OP’s is played by a suitable difference operator. …
• 1.

Hahn class (or linear lattice class). These are OP’s $p_{n}(x)$ where the role of $\frac{\mathrm{d}}{\mathrm{d}x}$ is played by $\Delta_{x}$ or $\nabla_{x}$ or $\delta_{x}$ (see §18.1(i) for the definition of these operators). The Hahn class consists of four discrete and two continuous families.

• 2.

Wilson class (or quadratic lattice class). These are OP’s $p_{n}(x)=p_{n}(\lambda(y))$ ($p_{n}(x)$ of degree $n$ in $x$, $\lambda(y)$ quadratic in $y$) where the role of the differentiation operator is played by $\tfrac{\Delta_{y}}{\Delta_{y}(\lambda(y))}$ or $\tfrac{\nabla_{y}}{\nabla_{y}(\lambda(y))}$ or $\tfrac{\delta_{y}}{\delta_{y}(\lambda(y))}$. The Wilson class consists of two discrete and two continuous families.

• Tables 18.19.1 and 18.19.2 provide definitions via orthogonality and standardization (§§18.2(i), 18.2(iii)) for the Hahn polynomials $Q_{n}\left(x;\alpha,\beta,N\right)$, Krawtchouk polynomials $K_{n}\left(x;p,N\right)$, Meixner polynomials $M_{n}\left(x;\beta,c\right)$, and Charlier polynomials $C_{n}\left(x;a\right)$. … A special case of (18.19.8) is $w^{(1/2)}(x;\pi/2)=\frac{\pi}{\cosh\left(\pi x\right)}$.
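The discrete orthogonality encoded in Table 18.19.1 can be checked directly. For the Charlier polynomials, $C_n\left(x;a\right)={}_2F_0(-n,-x;-;-1/a)$, the orthogonality relation is $\sum_{x=0}^{\infty}C_m C_n\,a^x/x! = a^{-n}e^a n!\,\delta_{mn}$. A numerical sketch that truncates the infinite sum; helper names and the truncation point are my own:

```python
import math

def charlier(n, x, a):
    """C_n(x; a) = 2F0(-n, -x; -; -1/a), a terminating sum since
    the Pochhammer factor (-n)_k vanishes for k > n."""
    s, term = 1.0, 1.0
    for k in range(n):
        term *= (-n + k) * (-x + k) / ((k + 1) * (-a))
        s += term
    return s

def inner(m, n, a, xmax=80):
    """Truncated orthogonality sum  sum_x C_m C_n a^x / x!
    over the lattice x = 0, 1, 2, ... (tail is negligible)."""
    return sum(charlier(m, x, a) * charlier(n, x, a) * a ** x / math.factorial(x)
               for x in range(xmax))

a = 1.5
print(inner(2, 3, a))                          # ~ 0  (orthogonality)
print(inner(2, 2, a), math.exp(a) * 2 / a**2)  # both ~ a^{-2} e^a 2!
```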
##### 9: 2.11 Remainder Terms; Stokes Phenomenon
###### §2.11(vi) Direct Numerical Transformations
We now compute the forward differences $\Delta^{j}$, $j=0,1,2,\dots$, of the moduli of the rounded values of the first 6 neglected terms: …Multiplying these differences by $(-1)^{j}2^{-j-1}$ and summing, we obtain … For example, extrapolated values may converge to an accurate value on one side of a Stokes line (§2.11(iv)), and converge to a quite inaccurate value on the other.
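The procedure just described is the Euler transformation of the alternating tail: with $a_j$ the moduli of the neglected terms, the tail is estimated by $\sum_j(-1)^j2^{-j-1}\Delta^ja_0$. A sketch on a convergent model problem, the tail of $\ln 2=\sum_{k\geq 0}(-1)^k/(k+1)$ after four terms; the choice of model series and of six retained differences is mine:

```python
import math

def euler_tail(a):
    """Estimate sum_{j>=0} (-1)^j a_j from the forward differences
    Delta^j a_0 of the listed moduli, via sum_j (-1)^j 2^{-j-1} Delta^j a_0."""
    est, d = 0.0, list(a)
    for j in range(len(a)):
        est += (-1) ** j * d[0] / 2 ** (j + 1)
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]  # next difference row
    return est

# ln 2 = 1 - 1/2 + 1/3 - 1/4 + tail; moduli of the first 6 neglected terms:
partial = 1 - 1/2 + 1/3 - 1/4
a = [1 / (k + 1) for k in range(4, 10)]     # 1/5, 1/6, ..., 1/10
print(partial + euler_tail(a), math.log(2))
```

Six differences of six slowly decaying terms already recover the tail to a few parts in $10^6$, far better than simply truncating the series.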
##### 10: 18.1 Notation
###### $x$-Differences
Forward differences: … Backward differences: … Central differences in imaginary direction: …
• Associated OP’s are denoted via addition of the letter $c$ at the end of the listing of parameters in their usual notations.
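The three difference operators are simple to write down; here $\delta_x$ is taken with the assumed normalization $\delta_xf(x)=\bigl(f(x+\tfrac12i)-f(x-\tfrac12i)\bigr)/i$, under which it acts on polynomials like a derivative-type operator. A sketch:

```python
def fwd(f, x):
    """Forward difference  Delta_x f(x) = f(x+1) - f(x)."""
    return f(x + 1) - f(x)

def bwd(f, x):
    """Backward difference  nabla_x f(x) = f(x) - f(x-1)."""
    return f(x) - f(x - 1)

def cid(f, x):
    """Central difference in the imaginary direction
    (assumed normalization): (f(x + i/2) - f(x - i/2)) / i."""
    return (f(x + 0.5j) - f(x - 0.5j)) / 1j

sq = lambda t: t * t
print(fwd(sq, 3), bwd(sq, 3), cid(sq, 3))
```

For $f(x)=x^2$ these give $7$, $5$, and $6$ at $x=3$; note that $\delta_x x^2=2x$ exactly, with no second-order correction, which is why quadratic-lattice operators of this kind appear in the Wilson class.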