
Lanczos tridiagonalization of a symmetric matrix



21–30 of 895 matching pages

21: 18.2 General Orthogonal Polynomials
The matrix on the left-hand side is an (infinite tridiagonal) Jacobi matrix. This matrix is symmetric iff $c_n = a_{n-1}$ ($n \ge 1$). … When the Jacobi matrix in (18.2.11_9) is truncated to an $n \times n$ matrix … In particular, a system of OP’s on a bounded interval is always complete. …
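The Jacobi matrix described here is exactly the kind of symmetric tridiagonal matrix that Lanczos tridiagonalization (the topic of this search) produces. Below is a minimal sketch, assuming NumPy; the `lanczos` routine is written for this illustration and is not taken from the DLMF, and full reorthogonalization is added only for numerical robustness in this toy example.

```python
# Sketch: Lanczos tridiagonalization of a real symmetric matrix A, producing
# the diagonal (alpha) and off-diagonal (beta) of a symmetric tridiagonal
# (Jacobi) matrix whose extreme eigenvalues approximate those of A.
import numpy as np

def lanczos(A, v0, k):
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return alpha, beta, Q

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2                                  # symmetric test matrix
alpha, beta, Q = lanczos(A, rng.standard_normal(50), 20)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
# Extreme eigenvalues of the small tridiagonal T approximate those of A.
print(np.linalg.eigvalsh(T)[[0, -1]], np.linalg.eigvalsh(A)[[0, -1]])
```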
22: 29.15 Fourier Series and Chebyshev Series
A convenient way of constructing the coefficients, together with the eigenvalues, is as follows. Equations (29.6.4), with $p = 1, 2, \dots, n$, (29.6.3), and $A_{2n+2} = 0$ can be cast as an algebraic eigenvalue problem in the following way. …be the tridiagonal matrix with $\alpha_p, \beta_p, \gamma_p$ as in (29.3.11), (29.3.12). …
29.15.9 $[A_1, A_3, \dots, A_{2n+1}]^{\mathrm{T}}$,
The set of coefficients of this polynomial (without normalization) can also be found directly as an eigenvector of an $(n+1) \times (n+1)$ tridiagonal matrix; see Arscott and Khabaza (1962). …
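A generic sketch of the pattern just described, reading a set of expansion coefficients off as an eigenvector of a tridiagonal matrix. The arrays `alpha`, `beta`, `gamma` below are placeholders, not the quantities of (29.3.11)–(29.3.12), which are not reproduced in this snippet.

```python
# Generic pattern: coefficients as an eigenvector of a tridiagonal matrix.
import numpy as np

n = 6
alpha = np.arange(1.0, n)           # sub-diagonal   (placeholder values)
beta  = np.linspace(2.0, 3.0, n)    # main diagonal  (placeholder values)
gamma = np.ones(n - 1)              # super-diagonal (placeholder values)

M = np.diag(beta) + np.diag(gamma, 1) + np.diag(alpha, -1)
eigvals, eigvecs = np.linalg.eig(M)

# Each column of eigvecs is a candidate coefficient vector (up to
# normalization), paired with the corresponding eigenvalue.
j = np.argmin(eigvals.real)
print(eigvals[j], eigvecs[:, j].real)
```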
23: 19.31 Probability Distributions
§19.31 Probability Distributions
$R_G(x,y,z)$ and $R_F(x,y,z)$ occur as the expectation values, relative to a normal probability distribution in $\mathbb{R}^2$ or $\mathbb{R}^3$, of the square root or reciprocal square root of a quadratic form. More generally, let $\mathbf{A}$ $(=[a_{r,s}])$ and $\mathbf{B}$ $(=[b_{r,s}])$ be real positive-definite matrices with $n$ rows and $n$ columns, and let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $\mathbf{A}\mathbf{B}^{-1}$. If $\mathbf{x}$ is a column vector with elements $x_1, x_2, \dots, x_n$ and transpose $\mathbf{x}^{\mathrm{T}}$, then
19.31.1 $\mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x} = \sum_{r=1}^{n}\sum_{s=1}^{n} a_{r,s}\, x_r x_s$,
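A small numerical check of (19.31.1) and of the eigenvalues $\lambda_1, \dots, \lambda_n$ of $\mathbf{A}\mathbf{B}^{-1}$. Computing them through the symmetric-definite generalized eigenproblem $\mathbf{A}\mathbf{v} = \lambda\mathbf{B}\mathbf{v}$ is a method choice made for this sketch, not something prescribed by §19.31.

```python
# Check x^T A x against the double sum, and eigenvalues of A B^{-1} against
# the generalized symmetric-definite eigenproblem A v = lambda B v.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 4

def spd(n):                          # random symmetric positive-definite matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B = spd(n), spd(n)
x = rng.standard_normal(n)

quad_sum = sum(A[r, s] * x[r] * x[s] for r in range(n) for s in range(n))
print(np.isclose(quad_sum, x @ A @ x))            # (19.31.1)

lam = eigh(A, B, eigvals_only=True)               # eigenvalues of A B^{-1}
direct = np.linalg.eigvals(A @ np.linalg.inv(B)).real
print(np.allclose(np.sort(lam), np.sort(direct)))
```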
24: 35.9 Applications
§35.9 Applications
See James (1964), Muirhead (1982), Takemura (1984), Farrell (1985), and Chikuse (2003) for extensive treatments. … These references all use results related to the integral formulas (35.4.7) and (35.5.8). … In chemistry, Wei and Eichinger (1993) express the probability density functions of macromolecules in terms of generalized hypergeometric functions of matrix argument, and develop asymptotic approximations for these density functions. In the nascent area of applications of zonal polynomials to the limiting probability distributions of symmetric random matrices, one of the most comprehensive accounts is Rains (1998).
25: 3.8 Nonlinear Equations
If $f(a)\,f(b) < 0$ with $a < b$, then the interval $[a,b]$ contains one or more zeros of $f$. … For the computation of zeros of orthogonal polynomials as eigenvalues of finite tridiagonal matrices (§3.5(vi)), see Gil et al. (2007a, pp. 205–207). For the computation of zeros of Bessel functions, Coulomb functions, and conical functions as eigenvalues of finite parts of infinite tridiagonal matrices, see Grad and Zakrajšek (1973), Ikebe (1975), Ikebe et al. (1991), Ball (2000), and Gil et al. (2007a, pp. 205–213). … It is called a Julia set. In general the Julia set of an analytic function $f(z)$ is a fractal, that is, a set that is self-similar. …
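As a concrete instance of the tridiagonal-eigenvalue approach cited above, the zeros of the Legendre polynomial $P_n$ can be computed as eigenvalues of a symmetric tridiagonal Jacobi matrix. The off-diagonal entries $k/\sqrt{4k^2-1}$ come from the standard three-term recurrence and are supplied here for illustration; they are not quoted from §3.8.

```python
# Zeros of the Legendre polynomial P_n as eigenvalues of its Jacobi matrix,
# cross-checked against the Gauss-Legendre nodes from NumPy.
import numpy as np

n = 10
k = np.arange(1, n)
offdiag = k / np.sqrt(4.0 * k**2 - 1.0)         # Jacobi-matrix off-diagonal
J = np.diag(offdiag, 1) + np.diag(offdiag, -1)  # diagonal is zero for Legendre

zeros = np.linalg.eigvalsh(J)                   # eigenvalues = zeros of P_n
nodes, _ = np.polynomial.legendre.leggauss(n)   # independent check
print(np.allclose(np.sort(zeros), np.sort(nodes)))
```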
26: 35.10 Methods of Computation
§35.10 Methods of Computation
See Yan (1992) for the ${}_1F_1$ and ${}_2F_1$ functions of matrix argument in the case $m = 2$, and Bingham et al. (1992) for Monte Carlo simulation on $\mathbf{O}(m)$ applied to a generalization of the integral (35.5.8). …
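A generic illustration of the kind of Monte Carlo averaging over $\mathbf{O}(m)$ mentioned above, using Haar-distributed samples from `scipy.stats.ortho_group`. The integrand `f` is a placeholder chosen for this sketch, not the generalization of (35.5.8) treated by Bingham et al. (1992).

```python
# Monte Carlo estimate of an average over the orthogonal group O(m),
# using Haar-distributed orthogonal matrices.
import numpy as np
from scipy.stats import ortho_group

m, nsamples = 3, 20000
rng = np.random.default_rng(2)
X = rng.standard_normal((m, m))
X = (X + X.T) / 2                          # fixed symmetric "parameter" matrix

def f(H):                                  # placeholder integrand on O(m)
    return np.exp(np.trace(X @ H))

samples = ortho_group.rvs(dim=m, size=nsamples, random_state=2)
estimate = np.mean([f(H) for H in samples])
print(estimate)                            # Monte Carlo estimate of the O(m) average
```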
27: 1.1 Special Notation
$x, y$: real variables.
$\mathbf{A}^{-1}$: inverse of the square matrix $\mathbf{A}$
$\det(\mathbf{A})$: determinant of the square matrix $\mathbf{A}$
$\operatorname{tr}(\mathbf{A})$: trace of the square matrix $\mathbf{A}$
$\mathbf{A}^{*}$: adjoint of the square matrix $\mathbf{A}$
In the physics, applied maths, and engineering literature a common alternative to $\bar{a}$ is $a^{*}$, $a$ being a complex number or a matrix; the Hermitian conjugate of $\mathbf{A}$ is usually denoted $\mathbf{A}^{\dagger}$.
28: 1.3 Determinants, Linear Operators, and Spectral Expansions
The determinant of an upper or lower triangular, or diagonal, square matrix $\mathbf{A}$ is the product of the diagonal elements, $\det(\mathbf{A}) = \prod_{i=1}^{n} a_{ii}$. … The adjoint of a matrix $\mathbf{A}$ is the matrix $\mathbf{A}^{*}$ such that $\langle \mathbf{A}\mathbf{a}, \mathbf{b} \rangle = \langle \mathbf{a}, \mathbf{A}^{*}\mathbf{b} \rangle$ for all $\mathbf{a}, \mathbf{b} \in \mathbf{E}_n$. In the case of a real matrix $\mathbf{A}^{*} = \mathbf{A}^{\mathrm{T}}$ and in the complex case $\mathbf{A}^{*} = \mathbf{A}^{\mathrm{H}}$. Real symmetric ($\mathbf{A} = \mathbf{A}^{\mathrm{T}}$) and Hermitian ($\mathbf{A} = \mathbf{A}^{\mathrm{H}}$) matrices are self-adjoint operators on $\mathbf{E}_n$. … Let the columns of matrix $\mathbf{S}$ be these eigenvectors $\mathbf{a}_1, \dots, \mathbf{a}_n$, then $\mathbf{S}^{-1} = \mathbf{S}^{\mathrm{H}}$, and the similarity transformation (1.2.73) is now of the form $\mathbf{S}^{\mathrm{H}}\mathbf{A}\mathbf{S} = [\lambda_i \delta_{i,j}]$. …
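A short numerical check of the statements above, using `numpy.linalg.eigh` as one convenient way to obtain the eigenvector matrix $\mathbf{S}$ for a Hermitian matrix; this is illustrative, not a procedure taken from §1.3.

```python
# For Hermitian A, the matrix S of orthonormal eigenvectors satisfies
# S^{-1} = S^H and S^H A S = diag(lambda_1, ..., lambda_n).
import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2                  # Hermitian: A = A^H

lam, S = np.linalg.eigh(A)                # real eigenvalues, unitary S
SH = S.conj().T                           # Hermitian conjugate S^H

print(np.allclose(SH @ S, np.eye(n)))           # S^{-1} = S^H (unitarity)
print(np.allclose(SH @ A @ S, np.diag(lam)))    # S^H A S = diag(lambda_i)
```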
29: 15.17 Mathematical Applications
For harmonic analysis it is more natural to represent hypergeometric functions as a Jacobi function (§15.9(ii)). …First, as spherical functions on noncompact Riemannian symmetric spaces of rank one, but also as associated spherical functions, intertwining functions, matrix elements of $\mathrm{SL}(2,\mathbb{R})$, and spherical functions on certain nonsymmetric Gelfand pairs. Harmonic analysis can be developed for the Jacobi transform either as a generalization of the Fourier-cosine transform (§1.14(ii)) or as a specialization of a group Fourier transform. … By considering, as a group, all analytic transformations of a basis of solutions under analytic continuation around all paths on the Riemann sheet, we obtain the monodromy group. …For a survey of this topic see Gray (2000).
30: 18.38 Mathematical Applications
Random Matrix Theory
Zhedanov Algebra
A symmetric Laurent polynomial is a function of the form …and $e_1, e_2, e_3, e_4$ are the elementary symmetric polynomials in $a, b, c, d$ given by $a+b+c+d$, $ab+ac+\cdots+cd$, $abc+abd+acd+bcd$, $abcd$, respectively. … The Dunkl type operator is a $q$-difference-reflection operator acting on Laurent polynomials and its eigenfunctions, the nonsymmetric Askey–Wilson polynomials, are linear combinations of the symmetric Laurent polynomial $R_n(z; a, b, c, d \,|\, q)$ and the ‘anti-symmetric’ Laurent polynomial $z^{-1}(1-az)(1-bz)\,R_{n-1}(z; qa, qb, c, d \,|\, q)$, where $R_n(z)$ is given in (18.28.1_5). …