In (1.2.1) and (1.2.3) $k$ and $n$ are nonnegative integers and $k \le n$. In (1.2.2), (1.2.4), and (1.2.5) $n$ is a positive integer. See also §26.3(i).
1.2.1
\[
\binom{n}{k} = \frac{n!}{(n-k)!\,k!} = \binom{n}{n-k}.
\]
For complex $z$ the binomial coefficient $\binom{z}{k}$ is defined via (1.2.6).
1.2.2
\[
(a+b)^n = a^n + \binom{n}{1} a^{n-1} b + \binom{n}{2} a^{n-2} b^2 + \dots + b^n,
\]
1.2.3
\[
\binom{n}{0} + \binom{n}{1} + \dots + \binom{n}{n} = 2^n,
\]
1.2.4
\[
\binom{n}{0} - \binom{n}{1} + \dots + (-1)^n \binom{n}{n} = 0.
\]
1.2.5
\[
\binom{n}{0} + \binom{n}{2} + \binom{n}{4} + \dots + \binom{n}{\ell} = 2^{n-1},
\]
where $\ell$ is $n$ or $n-1$ according as $n$ is even or odd.
See also §26.3.
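For example, with $n = 3$, (1.2.2) and (1.2.3) give
\[
(a+b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3, \qquad
\binom{3}{0} + \binom{3}{1} + \binom{3}{2} + \binom{3}{3} = 1 + 3 + 3 + 1 = 2^3.
\]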
1.2.10
\[
a + (a+d) + (a+2d) + \dots + (a+(n-1)d) = na + \tfrac{1}{2} n(n-1) d = \tfrac{1}{2} n (a + \ell),
\]
where $\ell$ = last term of the series = $a + (n-1)d$.
1.2.11
\[
a + ax + ax^2 + \dots + ax^{n-1} = \frac{a(1-x^n)}{1-x}, \qquad x \ne 1.
\]
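For example, (1.2.10) with $a = 1$, $d = 2$, $n = 5$ gives $1 + 3 + 5 + 7 + 9 = \tfrac{1}{2}\cdot 5\cdot(1+9) = 25$, and (1.2.11) with $a = 1$, $x = 2$, $n = 5$ gives $1 + 2 + 4 + 8 + 16 = \dfrac{1-2^5}{1-2} = 31$.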
Let $\alpha_1, \alpha_2, \dots, \alpha_n$ be distinct constants, and $f(x)$ be a polynomial of degree less than $n$. Then
1.2.12
\[
\frac{f(x)}{(x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_n)} = \frac{A_1}{x-\alpha_1} + \frac{A_2}{x-\alpha_2} + \dots + \frac{A_n}{x-\alpha_n},
\]
where
1.2.13
\[
A_j = \frac{f(\alpha_j)}{\prod\limits_{k \ne j} (\alpha_j - \alpha_k)}.
\]
If $m_1, m_2, \dots, m_n$ are positive integers and $\deg f < m_1 + m_2 + \dots + m_n$, then there exist polynomials $f_j(x)$, $\deg f_j < m_j$, such that
1.2.16
\[
\frac{f(x)}{(x-\alpha_1)^{m_1}(x-\alpha_2)^{m_2}\cdots(x-\alpha_n)^{m_n}} = \frac{f_1(x)}{(x-\alpha_1)^{m_1}} + \frac{f_2(x)}{(x-\alpha_2)^{m_2}} + \dots + \frac{f_n(x)}{(x-\alpha_n)^{m_n}}.
\]
To find the polynomials $f_j(x)$, $j = 1, 2, \dots, n$, multiply both sides by the denominator of the left-hand side and equate coefficients. See Chrystal (1959a, pp. 151–159).
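For example, decomposing $\dfrac{x}{(x-1)(x-2)}$ as $\dfrac{A_1}{x-1} + \dfrac{A_2}{x-2}$ and multiplying both sides by $(x-1)(x-2)$ gives $x = A_1(x-2) + A_2(x-1)$; equating coefficients (or evaluating at $x = 1$ and $x = 2$) yields $A_1 = -1$ and $A_2 = 2$, in agreement with (1.2.13).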
The arithmetic mean $A$ of $n$ numbers $a_1, a_2, \dots, a_n$ is
1.2.17
\[
A = \frac{a_1 + a_2 + \dots + a_n}{n}.
\]
The geometric mean $G$ and harmonic mean $H$ of $n$ positive numbers $a_1, a_2, \dots, a_n$ are given by
1.2.18
\[
G = (a_1 a_2 \cdots a_n)^{1/n},
\]
1.2.19
\[
\frac{1}{H} = \frac{1}{n} \left( \frac{1}{a_1} + \frac{1}{a_2} + \dots + \frac{1}{a_n} \right).
\]
If $r$ is a nonzero real number, then the weighted mean $M(r)$ of $n$ nonnegative numbers $a_1, a_2, \dots, a_n$, and $n$ positive numbers $p_1, p_2, \dots, p_n$ with
1.2.20
\[
p_1 + p_2 + \dots + p_n = 1,
\]
is defined by
1.2.21
\[
M(r) = (p_1 a_1^r + p_2 a_2^r + \dots + p_n a_n^r)^{1/r},
\]
with the exception
1.2.22
\[
M(r) = 0, \qquad r < 0 \text{ and } a_j = 0 \text{ for some } j.
\]
1.2.23
\[
\lim_{r \to \infty} M(r) = \max(a_1, a_2, \dots, a_n),
\]
1.2.24
\[
\lim_{r \to -\infty} M(r) = \min(a_1, a_2, \dots, a_n).
\]
For $p_j = 1/n$, $j = 1, 2, \dots, n$,
1.2.25
\[
M(1) = A, \qquad M(-1) = H,
\]
and
1.2.26
\[
\lim_{r \to 0} M(r) = G.
\]
The last two equations require $a_j > 0$ for all $j$.
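For example, for the three numbers $1, 2, 4$ with equal weights, $A = 7/3$, $G = 2$, and $H = 12/7$, illustrating the general inequality $H \le G \le A$.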
The full index form of an $m \times n$ matrix $\mathbf{A}$ is
1.2.27
\[
\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},
\]
with matrix elements $a_{ij} = (\mathbf{A})_{ij}$, where $i$, $j$ are the row and column indices, respectively. A matrix is zero if all its elements are zero, denoted $\mathbf{A} = \mathbf{0}$. A matrix is real if all its elements are real.
The transpose of $\mathbf{A} = [a_{ij}]$ is the $n \times m$ matrix
1.2.28
\[
\mathbf{A}^{\mathrm{T}} = [a_{ji}],
\]
the complex conjugate is
1.2.29
\[
\overline{\mathbf{A}} = [\overline{a_{ij}}],
\]
the Hermitian conjugate is
1.2.30
\[
\mathbf{A}^{\mathrm{H}} = \overline{\mathbf{A}}^{\mathrm{T}} = [\overline{a_{ji}}].
\]
Multiplication by a scalar $c$ is given by
1.2.31
\[
c\mathbf{A} = [c\,a_{ij}].
\]
For matrices $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ of the same dimensions,
1.2.32
\[
\mathbf{A} + \mathbf{B} = [a_{ij} + b_{ij}],
\]
1.2.33
\[
\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A}, \qquad (\mathbf{A} + \mathbf{B}) + \mathbf{C} = \mathbf{A} + (\mathbf{B} + \mathbf{C}).
\]
Multiplication of an $m \times n$ matrix $\mathbf{A}$ and an $n' \times p$ matrix $\mathbf{B}$, giving the $m \times p$ matrix $\mathbf{C} = \mathbf{A}\mathbf{B}$, is defined iff $n' = n$. If defined, $\mathbf{C} = [c_{ij}]$ with
1.2.34
\[
c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.
\]
This is the row times column rule.
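For example,
\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
=
\begin{bmatrix} 1\cdot 0 + 2\cdot 1 & 1\cdot 1 + 2\cdot 0 \\ 3\cdot 0 + 4\cdot 1 & 3\cdot 1 + 4\cdot 0 \end{bmatrix}
=
\begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}.
\]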
Assuming the indicated multiplications are defined: matrix multiplication is associative
1.2.35
\[
(\mathbf{A}\mathbf{B})\mathbf{C} = \mathbf{A}(\mathbf{B}\mathbf{C}),
\]
and distributive if $\mathbf{B}$ and $\mathbf{C}$ have the same dimensions:
1.2.36
\[
\mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}.
\]
The transpose of the product is
1.2.37
\[
(\mathbf{A}\mathbf{B})^{\mathrm{T}} = \mathbf{B}^{\mathrm{T}} \mathbf{A}^{\mathrm{T}}.
\]
All of the above are defined for $n \times n$, or square, matrices of order $n$. Note that matrix multiplication is not necessarily commutative; see §1.2(vi) for special properties of square matrices.
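For example, reversing the order of the two matrices multiplied above gives
\[
\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
=
\begin{bmatrix} 3 & 4 \\ 1 & 2 \end{bmatrix}
\ne
\begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}.
\]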
A column vector $\mathbf{a}$ of length $n$ is an $n \times 1$ matrix
1.2.38
\[
\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix},
\]
and the corresponding transposed row vector of length $n$ is
1.2.39
\[
\mathbf{a}^{\mathrm{T}} = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}.
\]
The column vector is often written as $[a_1\; a_2\; \cdots\; a_n]^{\mathrm{T}}$ to avoid inconvenient typography. The zero vector $\mathbf{0}$ has $a_i = 0$ for $i = 1, 2, \dots, n$.
Column vectors $\mathbf{a}$ and $\mathbf{b}$ of the same length $n$ have a scalar product
1.2.40
\[
\langle \mathbf{a}, \mathbf{b} \rangle = a_1 \overline{b_1} + a_2 \overline{b_2} + \dots + a_n \overline{b_n}.
\]
The dot product notation $\mathbf{a} \cdot \mathbf{b}$ is reserved for the physical three-dimensional vectors of (1.6.2).
The scalar product has properties
1.2.41
\[
\langle \mathbf{a}, \mathbf{b} \rangle = \overline{\langle \mathbf{b}, \mathbf{a} \rangle},
\]
for $c \in \mathbb{C}$
1.2.42
\[
\langle c\,\mathbf{a}, \mathbf{b} \rangle = c\,\langle \mathbf{a}, \mathbf{b} \rangle, \qquad \langle \mathbf{a}, c\,\mathbf{b} \rangle = \overline{c}\,\langle \mathbf{a}, \mathbf{b} \rangle,
\]
and
1.2.43
\[
\langle \mathbf{a}, \mathbf{a} \rangle \ge 0,
\]
with $\langle \mathbf{a}, \mathbf{a} \rangle = 0$ if and only if $\mathbf{a} = \mathbf{0}$.
Two vectors $\mathbf{a}$ and $\mathbf{b}$ are orthogonal if
1.2.44
\[
\langle \mathbf{a}, \mathbf{b} \rangle = 0.
\]
The $\ell^p$ norm of a (real or complex) vector $\mathbf{a}$ is
1.2.45
\[
\|\mathbf{a}\|_p = \left( \sum_{i=1}^{n} |a_i|^p \right)^{1/p}, \qquad p \ge 1.
\]
Special cases are the Euclidean length or $\ell^2$ norm
1.2.46
\[
\|\mathbf{a}\| = \|\mathbf{a}\|_2 = \left( \sum_{i=1}^{n} |a_i|^2 \right)^{1/2},
\]
the $\ell^1$ norm
1.2.47
\[
\|\mathbf{a}\|_1 = \sum_{i=1}^{n} |a_i|,
\]
and, as $p \to \infty$, the $\ell^\infty$ norm
1.2.48
\[
\lim_{p \to \infty} \|\mathbf{a}\|_p = \|\mathbf{a}\|_\infty = \max_{1 \le i \le n} |a_i|.
\]
The $\ell^2$ norm is implied unless otherwise indicated. A vector of norm unity is normalized, and every non-zero vector $\mathbf{a}$ can be normalized via $\mathbf{a} \to \mathbf{a}/\|\mathbf{a}\|$.
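For example, for $\mathbf{a} = [3\; {-4}]^{\mathrm{T}}$, $\|\mathbf{a}\|_1 = 7$, $\|\mathbf{a}\|_2 = 5$, and $\|\mathbf{a}\|_\infty = 4$; the normalized vector is $\mathbf{a}/\|\mathbf{a}\| = \bigl[\tfrac{3}{5}\; {-\tfrac{4}{5}}\bigr]^{\mathrm{T}}$.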
If
1.2.49
\[
\frac{1}{p} + \frac{1}{q} = 1, \qquad p, q > 1,
\]
we have Hölder’s Inequality
1.2.50
\[
|\langle \mathbf{a}, \mathbf{b} \rangle| \le \|\mathbf{a}\|_p \, \|\mathbf{b}\|_q,
\]
which for $p = q = 2$ is the Cauchy–Schwarz inequality
1.2.51
\[
|\langle \mathbf{a}, \mathbf{b} \rangle| \le \|\mathbf{a}\|_2 \, \|\mathbf{b}\|_2,
\]
the equality holding iff $\mathbf{b}$ is a scalar (real or complex) multiple of $\mathbf{a}$. The triangle inequality is
1.2.52
\[
\|\mathbf{a} + \mathbf{b}\| \le \|\mathbf{a}\| + \|\mathbf{b}\|.
\]
For similar and more inequalities see §1.7(i).
Square matrices (said to be of order $n$) dominate the use of matrices in the DLMF, and they have many special properties. Unless otherwise indicated, matrices are assumed square, of order $n$; and, when vectors are combined with them, these are of length $n$.
The identity matrix $\mathbf{I}$ is defined as
1.2.53
\[
(\mathbf{I})_{jk} = \delta_{j,k}.
\]
A matrix $\mathbf{A}$ is: a diagonal matrix if
1.2.54
\[
a_{jk} = 0, \qquad j \ne k,
\]
a real symmetric matrix if
1.2.55
\[
a_{jk} = a_{kj},
\]
an Hermitian matrix if
1.2.56
\[
a_{jk} = \overline{a_{kj}},
\]
a tridiagonal matrix if
1.2.57
\[
a_{jk} = 0, \qquad |j - k| > 1.
\]
$\mathbf{A}$ is an upper or lower triangular matrix if all $a_{jk}$ vanish for $j > k$ or $j < k$, respectively.
If $\det(\mathbf{A}) \ne 0$, $\mathbf{A}$ has a unique inverse, $\mathbf{A}^{-1}$, such that
1.2.60
\[
\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}.
\]
A square matrix $\mathbf{A}$ is singular if $\det(\mathbf{A}) = 0$, otherwise it is non-singular. If $\det(\mathbf{A}) = 0$ then $\mathbf{A}\mathbf{B} = \mathbf{A}\mathbf{C}$ does not imply that $\mathbf{B} = \mathbf{C}$; if $\det(\mathbf{A}) \ne 0$, then $\mathbf{B} = \mathbf{C}$, as both sides may be multiplied by $\mathbf{A}^{-1}$.
Given a square matrix $\mathbf{A}$ and a vector $\mathbf{b}$: if $\det(\mathbf{A}) \ne 0$, the system of $n$ linear equations in $n$ unknowns,
1.2.61
\[
\mathbf{A}\mathbf{x} = \mathbf{b},
\]
has a unique solution, $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$. If $\det(\mathbf{A}) = 0$ then, depending on $\mathbf{b}$, there is either no solution or there are infinitely many solutions, each being the sum of a particular solution of (1.2.61) and any solution of $\mathbf{A}\mathbf{x} = \mathbf{0}$. Numerical methods and issues for solution of (1.2.61) appear in §§3.2(i) to 3.2(iii).
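For example, with
\[
\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} 5 \\ 11 \end{bmatrix},
\]
$\det(\mathbf{A}) = -2 \ne 0$, and the unique solution of (1.2.61) is $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b} = [1\; 2]^{\mathrm{T}}$.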
The trace of $\mathbf{A}$ is
1.2.62
\[
\operatorname{tr}(\mathbf{A}) = \sum_{j=1}^{n} a_{jj}.
\]
Further,
1.2.63
\[
\operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B}),
\]
1.2.64
\[
\operatorname{tr}(c\,\mathbf{A}) = c \operatorname{tr}(\mathbf{A}),
\]
and
1.2.65
\[
\operatorname{tr}(\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A}).
\]
If $\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A}$ the matrices $\mathbf{A}$ and $\mathbf{B}$ are said to commute. The difference between $\mathbf{A}\mathbf{B}$ and $\mathbf{B}\mathbf{A}$ is the commutator, denoted as
1.2.66
\[
[\mathbf{A}, \mathbf{B}] = \mathbf{A}\mathbf{B} - \mathbf{B}\mathbf{A}.
\]
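For example, for the two matrices multiplied above,
\[
\left[ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \right]
=
\begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}
-
\begin{bmatrix} 3 & 4 \\ 1 & 2 \end{bmatrix}
=
\begin{bmatrix} -1 & -3 \\ 3 & 1 \end{bmatrix},
\]
which is nonzero, so these matrices do not commute; its trace is zero, in accord with (1.2.65).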
Let $\|\cdot\|$ be the $\ell^2$ norm, and $\mathbb{C}^n$ the space of all $n$-dimensional vectors. We take $\mathbf{v} \in \mathbb{C}^n$, but we can also restrict ourselves to vectors and matrices with only real elements. The norm of an order $n$ square matrix, $\|\mathbf{A}\|$, is
1.2.67
\[
\|\mathbf{A}\| = \max_{\mathbf{v} \ne \mathbf{0}} \frac{\|\mathbf{A}\mathbf{v}\|}{\|\mathbf{v}\|}.
\]
Then
1.2.68
\[
\|\mathbf{A}\mathbf{v}\| \le \|\mathbf{A}\| \, \|\mathbf{v}\|,
\]
and
1.2.69
\[
\|\mathbf{A}\mathbf{B}\| \le \|\mathbf{A}\| \, \|\mathbf{B}\|.
\]
A square matrix $\mathbf{A}$ has an eigenvalue $\lambda$ with corresponding eigenvector $\mathbf{a} \ne \mathbf{0}$ if
1.2.70
\[
\mathbf{A}\mathbf{a} = \lambda \mathbf{a}.
\]
Here $\lambda$ and $\mathbf{a}$ may be complex even if $\mathbf{A}$ is real. Eigenvalues are the roots of the polynomial equation
1.2.71
\[
\det(\mathbf{A} - \lambda \mathbf{I}) = 0,
\]
and for the corresponding eigenvectors one has to solve the linear system
1.2.72
\[
(\mathbf{A} - \lambda \mathbf{I}) \mathbf{a} = \mathbf{0}.
\]
Numerical methods and issues for solution of (1.2.72) appear in §§3.2(iv) to 3.2(vii).
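For example, for
\[
\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix},
\]
(1.2.71) reads $(2-\lambda)^2 - 1 = 0$, giving eigenvalues $\lambda = 1$ and $\lambda = 3$, with corresponding eigenvectors $[1\; {-1}]^{\mathrm{T}}$ and $[1\; 1]^{\mathrm{T}}$.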
Nonzero vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_m$ are linearly independent if $c_1 \mathbf{u}_1 + c_2 \mathbf{u}_2 + \dots + c_m \mathbf{u}_m = \mathbf{0}$ implies that all coefficients $c_i$ are zero. A matrix $\mathbf{A}$ of order $n$ is non-defective if it has $n$ linearly independent (possibly complex) eigenvectors; otherwise $\mathbf{A}$ is called defective. Non-defective matrices are precisely the matrices which can be diagonalized via a similarity transformation of the form
1.2.73
\[
\mathbf{S}^{-1} \mathbf{A} \mathbf{S} = \boldsymbol{\Lambda} = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n).
\]
The columns of the invertible matrix $\mathbf{S}$ are eigenvectors of $\mathbf{A}$, and $\boldsymbol{\Lambda}$ is a diagonal matrix with the eigenvalues $\lambda_j$ as diagonal elements. The diagonal elements are not necessarily distinct, and the number of identical (degenerate) diagonal elements is the multiplicity of that specific eigenvalue. The sum of all multiplicities is $n$.
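Continuing the eigenvalue example above, placing the eigenvectors in the columns of $\mathbf{S}$ gives
\[
\mathbf{S} = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}, \qquad
\mathbf{S}^{-1} \mathbf{A} \mathbf{S} = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}.
\]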
The matrix exponential is defined via
1.2.76
\[
e^{\mathbf{A}} = \sum_{n=0}^{\infty} \frac{\mathbf{A}^n}{n!},
\]
which converges, entry-wise or in norm, for all $\mathbf{A}$.
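For example, for the nilpotent matrix
\[
\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad \mathbf{A}^2 = \mathbf{0},
\]
the series terminates and $e^{\mathbf{A}} = \mathbf{I} + \mathbf{A} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$; for a diagonal matrix, $e^{\operatorname{diag}(\lambda_1, \dots, \lambda_n)} = \operatorname{diag}(e^{\lambda_1}, \dots, e^{\lambda_n})$.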