# maximum


## 11–20 of 73 matching pages

##### 11: 16.13 Appell Functions
16.13.1 ${F_{1}}\left(\alpha;\beta,\beta^{\prime};\gamma;x,y\right)=\sum_{m,n=0}^{\infty}\frac{{\left(\alpha\right)_{m+n}}{\left(\beta\right)_{m}}{\left(\beta^{\prime}\right)_{n}}}{{\left(\gamma\right)_{m+n}}m!n!}x^{m}y^{n},$ $\max\left(|x|,|y|\right)<1$,
16.13.3 ${F_{3}}\left(\alpha,\alpha^{\prime};\beta,\beta^{\prime};\gamma;x,y\right)=\sum_{m,n=0}^{\infty}\frac{{\left(\alpha\right)_{m}}{\left(\alpha^{\prime}\right)_{n}}{\left(\beta\right)_{m}}{\left(\beta^{\prime}\right)_{n}}}{{\left(\gamma\right)_{m+n}}m!n!}x^{m}y^{n},$ $\max\left(|x|,|y|\right)<1$,
##### 12: 31.8 Solutions via Quadratures
31.8.3 $g=\tfrac{1}{2}\max\left(2\max_{0\leq k\leq 3}m_{k},1+N-(1+(-1)^{N})\left(\tfrac{1}{2}+\min_{0\leq k\leq 3}m_{k}\right)\right).$
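For concrete parameter values, (31.8.3) is straightforward to evaluate; note that the factor $(1+(-1)^{N})$ vanishes for odd $N$. A small Python sketch (the name `heun_g` is illustrative), where `m` holds $(m_{0},m_{1},m_{2},m_{3})$:

```python
def heun_g(m, N):
    """Evaluate (31.8.3):
    g = (1/2) max(2 max_k m_k, 1 + N - (1 + (-1)^N)(1/2 + min_k m_k)).
    """
    term = 1 + N - (1 + (-1)**N) * (0.5 + min(m))
    return 0.5 * max(2 * max(m), term)
```

For example, with $m=(1,0,0,0)$ this gives $g=2$ for $N=3$ (odd, so the trailing term reduces to $1+N$) and $g=1$ for $N=2$.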
##### 13: Bibliography
• R. W. Abernathy and R. P. Smith (1993) Algorithm 724: Program to calculate F-percentiles. ACM Trans. Math. Software 19 (4), pp. 481–483.
• A. G. Adams (1969) Algorithm 39: Areas under the normal curve. The Computer Journal 12 (2), pp. 197–198.
• D. E. Amos (1983b) Algorithm 610: A portable FORTRAN subroutine for derivatives of the psi function. ACM Trans. Math. Software 9 (4), pp. 494–502.
• D. E. Amos (1980a) Algorithm 556: Exponential integrals. ACM Trans. Math. Software 6 (3), pp. 420–428.
• W. L. Anderson (1982) Algorithm 588: Fast Hankel transforms using related and lagged convolutions. ACM Trans. Math. Software 8 (4), pp. 369–370.
##### 14: 11.15 Approximations
• Newman (1984) gives polynomial approximations for $\mathbf{H}_{n}\left(x\right)$ for $n=0,1$, $0\leq x\leq 3$, and rational-fraction approximations for $\mathbf{H}_{n}\left(x\right)-Y_{n}\left(x\right)$ for $n=0,1$, $x\geq 3$. The maximum errors do not exceed $1.2\times 10^{-8}$ for the former and $2.5\times 10^{-8}$ for the latter.

##### 15: 16.21 Differential Equation
This equation is of order $\max(p,q)$. …
##### 16: 30.15 Signal Analysis
The maximum (or least upper bound) $\mathrm{B}$ of all numbers …
30.15.11 $\operatorname{arccos}\sqrt{\mathrm{B}}+\operatorname{arccos}\sqrt{\alpha}=\operatorname{arccos}\sqrt{\Lambda_{0}},$
30.15.12 $\mathrm{B}=\left(\sqrt{\Lambda_{0}\alpha}+\sqrt{1-\Lambda_{0}}\sqrt{1-\alpha}\right)^{2}.$
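Equations (30.15.11) and (30.15.12) are two forms of the same relation, which is easy to confirm numerically for sample values of $\Lambda_{0}$ and $\alpha$ in $[0,1]$. A Python sketch (the function name is illustrative):

```python
import math

def band_limit_B(Lambda0, alpha):
    """(30.15.12): B = (sqrt(Lambda0*alpha) + sqrt(1-Lambda0)*sqrt(1-alpha))^2.

    Assumes 0 <= Lambda0 <= 1 and 0 <= alpha <= 1.
    """
    return (math.sqrt(Lambda0 * alpha)
            + math.sqrt(1 - Lambda0) * math.sqrt(1 - alpha)) ** 2
```

For instance, with $\Lambda_{0}=0.5$ and $\alpha=0.9$ this gives $\mathrm{B}=0.8$, and one can check that $\operatorname{arccos}\sqrt{\mathrm{B}}+\operatorname{arccos}\sqrt{\alpha}$ then equals $\operatorname{arccos}\sqrt{\Lambda_{0}}$, as (30.15.11) requires.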
##### 17: Bibliography G
• W. Gautschi (1964b) Algorithm 236: Bessel functions of the first kind. Comm. ACM 7 (8), pp. 479–480.
• W. Gautschi (1969) Algorithm 363: Complex error function. Comm. ACM 12 (11), p. 635.
• W. Gautschi (1973) Algorithm 471: Exponential integrals. Comm. ACM 16 (12), pp. 761–763.
• W. Gautschi (1977b) Algorithm 521: Repeated integrals of the coerror function. ACM Trans. Math. Software 3, pp. 301–302.
• A. Gil and J. Segura (1997) Evaluation of Legendre functions of argument greater than one. Comput. Phys. Comm. 105 (2-3), pp. 273–283.
##### 18: 3.2 Linear Algebra
The $p$-norm of a vector $\mathbf{x}=[x_{1},\dots,x_{n}]^{\rm T}$ is given by …
$\|\mathbf{x}\|_{\infty}=\max_{1\leq j\leq n}\left|x_{j}\right|.$
3.2.14 $\|\mathbf{A}\|_{p}=\max_{\mathbf{x}\neq\boldsymbol{0}}\frac{\|\mathbf{A}\mathbf{x}\|_{p}}{\|\mathbf{x}\|_{p}}\,.$
$\|\mathbf{A}\|_{1}=\max_{1\leq k\leq n}\sum_{j=1}^{n}\left|a_{jk}\right|,$
$\|\mathbf{A}\|_{\infty}=\max_{1\leq j\leq n}\sum_{k=1}^{n}\left|a_{jk}\right|,$
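These max-based norms translate directly into code: $\|\mathbf{A}\|_{1}$ is the maximum absolute column sum and $\|\mathbf{A}\|_{\infty}$ the maximum absolute row sum. A minimal pure-Python sketch (function names are illustrative), with `A` given as a list of rows:

```python
def vec_inf_norm(x):
    """||x||_inf = max_j |x_j|."""
    return max(abs(v) for v in x)

def mat_one_norm(A):
    """||A||_1 = maximum absolute column sum."""
    n_cols = len(A[0])
    return max(sum(abs(row[k]) for row in A) for k in range(n_cols))

def mat_inf_norm(A):
    """||A||_inf = maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)
```

For $\mathbf{A}=\begin{bmatrix}1&-2\\3&4\end{bmatrix}$ these give $\|\mathbf{A}\|_{1}=6$ (second column) and $\|\mathbf{A}\|_{\infty}=7$ (second row).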
##### 19: 13.10 Integrals
13.10.3 $\int_{0}^{\infty}e^{-zt}t^{b-1}{\mathbf{M}}\left(a,c,kt\right)\mathrm{d}t=\Gamma\left(b\right)z^{-b}{{}_{2}{\mathbf{F}}_{1}}\left(a,b;c;k/z\right),$ $\Re b>0$, $\Re z>\max\left(\Re k,0\right)$,
13.10.7 $\int_{0}^{\infty}e^{-zt}t^{b-1}U\left(a,c,t\right)\mathrm{d}t=\Gamma\left(b\right)\Gamma\left(b-c+1\right)z^{-b}{{}_{2}{\mathbf{F}}_{1}}\left(a,b;a+b-c+1;1-\frac{1}{z}\right),$ $\Re b>\max\left(\Re c-1,0\right)$, $\Re z>0$.
13.10.11 $\int_{0}^{\infty}t^{\lambda-1}U\left(a,b,t\right)\mathrm{d}t=\frac{\Gamma\left(\lambda\right)\Gamma\left(a-\lambda\right)\Gamma\left(\lambda-b+1\right)}{\Gamma\left(a\right)\Gamma\left(a-b+1\right)},$ $\max\left(\Re b-1,0\right)<\Re\lambda<\Re a$.
13.10.15 $\int_{0}^{\infty}t^{\frac{1}{2}\nu}U\left(a,b,t\right)J_{\nu}\left(2\sqrt{xt}\right)\mathrm{d}t=\frac{\Gamma\left(\nu-b+2\right)}{\Gamma\left(a\right)}x^{\frac{1}{2}\nu}U\left(\nu-b+2,\nu-a+2,x\right),$ $x>0$, $\max\left(\Re b-2,-1\right)<\Re\nu<2\Re a+\tfrac{1}{2}$,
13.10.16 $\int_{0}^{\infty}e^{-t}t^{\frac{1}{2}\nu}U\left(a,b,t\right)J_{\nu}\left(2\sqrt{xt}\right)\mathrm{d}t=\Gamma\left(\nu-b+2\right)x^{\frac{1}{2}\nu}e^{-x}{\mathbf{M}}\left(a,a-b+\nu+2,x\right),$ $x>0$, $\max\left(\Re b-2,-1\right)<\Re\nu$.
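In the elementary special case $a=c$ of (13.10.3), ${\mathbf{M}}\left(a,a,kt\right)=e^{kt}/\Gamma\left(a\right)$ and ${{}_{2}{\mathbf{F}}_{1}}\left(a,b;a;x\right)=(1-x)^{-b}/\Gamma\left(a\right)$, so both sides reduce to $\Gamma\left(b\right)(z-k)^{-b}/\Gamma\left(a\right)$, and the formula can be checked by quadrature. A sketch using composite Simpson integration on a truncated interval (the helper `simpson` and all parameter values are illustrative; real $z>k>0$ is assumed so the tail is negligible):

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# Illustrative parameters with a = c, so M(a, a, kt) = exp(kt)/Gamma(a).
a_, b_, z, k = 1.5, 2, 2.0, 0.5

# Left side of (13.10.3): the integral, truncated at t = 40 (tail ~ e^{-60}).
lhs = simpson(lambda t: math.exp(-z * t) * t**(b_ - 1)
              * math.exp(k * t) / math.gamma(a_), 0.0, 40.0)

# Right side: Gamma(b) z^{-b} 2F1(a, b; a; k/z) / Gamma(a) = Gamma(b)(z-k)^{-b}/Gamma(a).
rhs = math.gamma(b_) * (z - k)**(-b_) / math.gamma(a_)
```

The two values agree to roughly the accuracy of the quadrature; this only exercises the special case, not the general regularized functions.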
##### 20: 1.4 Calculus of One Variable
###### Maxima and Minima
as $\max(x_{j+1}-x_{j})\to 0$. …
###### §1.4(vii) Maxima and Minima
If $f(x)$ is twice-differentiable, and if also $f^{\prime}(x_{0})=0$ and $f^{\prime\prime}(x_{0})<0$ ($>0$), then $x=x_{0}$ is a local maximum (minimum) (§1.4(iii)) of $f(x)$. The overall maximum (minimum) of $f(x)$ on $[a,b]$ will either be at a local maximum (minimum) or at one of the end points $a$ or $b$. …
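The second-derivative test described here can also be illustrated numerically with central differences; a sketch (the function name, step size `h`, and tolerance are illustrative, and finite differences only approximate $f^{\prime}$ and $f^{\prime\prime}$):

```python
def classify_critical_point(f, x0, h=1e-5, tol=1e-6):
    """Second-derivative test at x0 via central finite differences."""
    d1 = (f(x0 + h) - f(x0 - h)) / (2 * h)        # approximates f'(x0)
    d2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2  # approximates f''(x0)
    if abs(d1) > tol:
        return "not a critical point"
    if d2 < 0:
        return "local maximum"
    if d2 > 0:
        return "local minimum"
    return "test inconclusive"
```

For example, $f(x)=-(x-2)^{2}+3$ has $f^{\prime}(2)=0$ and $f^{\prime\prime}(2)=-2<0$, so $x=2$ is classified as a local maximum; when $f^{\prime\prime}(x_{0})=0$ the test is inconclusive, matching the text.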