# residual vector


## 1–10 of 36 matching pages

##### 2: 3.2 Linear Algebra
###### Iterative Refinement
Because of rounding errors, the residual vector $\mathbf{r}=\mathbf{b}-\mathbf{A}\mathbf{x}$ is nonzero as a rule. … The $p$-norm of a vector $\mathbf{x}=[x_{1},\dots,x_{n}]^{\rm T}$ is given by … The sensitivity of the solution vector $\mathbf{x}$ in (3.2.1) to small perturbations in the matrix $\mathbf{A}$ and the vector $\mathbf{b}$ is measured by the condition number. … Let $\mathbf{x}^{*}$ denote a computed solution of the system (3.2.1), with $\mathbf{r}=\mathbf{b}-\mathbf{A}\mathbf{x}^{*}$ again denoting the residual. …
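As a concrete illustration (the $2\times 2$ system below is a hypothetical example, not taken from the text), the residual, one step of iterative refinement, and the condition number can be sketched in NumPy:

```python
import numpy as np

# Hypothetical well-conditioned system A x = b, for illustration only.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)   # computed solution x*
r = b - A @ x               # residual r = b - A x*; nonzero as a rule

# One step of iterative refinement: solve A d = r, then correct x*.
d = np.linalg.solve(A, r)
x_refined = x + d

# The 2-norm condition number kappa(A) measures the sensitivity of x
# to small perturbations in A and b.
kappa = np.linalg.cond(A, p=2)
print(np.linalg.norm(r), kappa)
```

For a well-conditioned matrix like this one the residual is already near machine precision, so the refinement step changes little; for ill-conditioned systems the correction can be substantial.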
##### 3: 1.18 Linear Second Order Differential Operators and Eigenfunction Expansions
A complex linear vector space $V$ is called an inner product space if an inner product $\left\langle u,v\right\rangle\in\mathbb{C}$ is defined for all $u,v\in V$ with the properties: (i) $\left\langle u,v\right\rangle$ is complex linear in $u$; (ii) $\left\langle u,v\right\rangle=\overline{\left\langle v,u\right\rangle}$; (iii) $\left\langle v,v\right\rangle\geq 0$; (iv) if $\left\langle v,v\right\rangle=0$ then $v=0$. …$V$ becomes a normed linear vector space. If $\left\|{v}\right\|=1$ then $v$ is normalized. …
• 3. The residual spectrum. It consists of all $z\in\mathbb{C}$ for which $z-T$ is injective, but does not have dense range.

• If $T$ is self-adjoint (bounded or unbounded) then $\sigma(T)$ is a closed subset of $\mathbb{R}$ and the residual spectrum is empty. …
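In finite dimensions the residual spectrum of any matrix is empty and the spectrum consists of eigenvalues alone, so the self-adjoint case reduces to the statement that a Hermitian matrix has real eigenvalues. A minimal NumPy check (the matrix `T` below is an arbitrary illustrative choice):

```python
import numpy as np

# Build a self-adjoint (Hermitian) matrix T = T* from an arbitrary
# complex matrix M; seed and size are illustrative assumptions.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = (M + M.conj().T) / 2

# eigvalsh exploits T = T* and returns a real array: sigma(T) is real.
eigs = np.linalg.eigvalsh(T)
print(eigs)
```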
##### 4: 1.2 Elementary Algebra
###### Row and Column Vectors
and the corresponding transposed row vector of length $n$ is … Two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if …
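For example (vectors chosen purely for illustration), orthogonality of $\mathbf{u}$ and $\mathbf{v}$ simply means their scalar product vanishes:

```python
import numpy as np

# Illustrative vectors: u . v = 1*2 + 2*0 + (-1)*2 = 0, so u is orthogonal to v.
u = np.array([1.0, 2.0, -1.0])
v = np.array([2.0, 0.0, 2.0])
print(np.dot(u, v))  # → 0.0
```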
##### 5: Bibliography L
• E. Lindelöf (1905) Le Calcul des Résidus et ses Applications à la Théorie des Fonctions. Gauthier-Villars, Paris (French).
• A. E. Lynas-Gray (1993) VOIGTL – A fast subroutine for Voigt function evaluation on vector processors. Comput. Phys. Comm. 75 (1-2), pp. 135–142.
##### 6: 1.1 Special Notation
 $x,y$ real variables. … inner, or scalar, product for real or complex vectors or functions. … column vectors. … the space of all $n$-dimensional vectors. …
##### 7: 21.1 Special Notation
 $g,h$ positive integers. … $g$-dimensional vectors, with all elements in $[0,1)$, unless stated otherwise. $j$th element of vector $\mathbf{a}$. … scalar product of the vectors $\mathbf{a}$ and $\mathbf{b}$. … set of $g$-dimensional vectors with elements in $S$. …
Lowercase boldface letters or numbers are $g$-dimensional real or complex vectors, either row or column depending on the context. …
##### 8: 21.6 Products
that is, $\mathcal{D}$ is the number of elements in the set containing all $h$-dimensional vectors obtained by multiplying $\mathbf{T}^{\mathrm{T}}$ on the right by a vector with integer elements. Two such vectors are considered equivalent if their difference is a vector with integer elements. …where $\mathbf{c}_{j}$ and $\mathbf{d}_{j}$ are arbitrary $h$-dimensional vectors. … Then …Thus $\boldsymbol{{\nu}}$ is a $g$-dimensional vector whose entries are either $0$ or $1$. …
##### 9: 1.3 Determinants, Linear Operators, and Spectral Expansions
###### Linear Operators in Finite Dimensional Vector Spaces
Square matrices can be seen as linear operators because $\mathbf{A}(\alpha\mathbf{a}+\beta\mathbf{b})=\alpha\mathbf{A}\mathbf{a}+\beta\mathbf{A}\mathbf{b}$ for all $\alpha,\beta\in\mathbb{C}$ and $\mathbf{a},\mathbf{b}\in\mathbf{E}_{n}$, the space of all $n$-dimensional vectors. … The adjoint of a matrix $\mathbf{A}$ is the matrix ${\mathbf{A}}^{*}$ such that $\left\langle\mathbf{A}\mathbf{a},\mathbf{b}\right\rangle=\left\langle\mathbf{a},{\mathbf{A}}^{*}\mathbf{b}\right\rangle$ for all $\mathbf{a},\mathbf{b}\in\mathbf{E}_{n}$. … Assuming $\{\mathbf{a}_{i}\}$ is an orthonormal basis in $\mathbf{E}_{n}$, any vector $\mathbf{u}$ may be expanded as
1.3.20 $\mathbf{u}=\sum_{i=1}^{n}c_{i}\mathbf{a}_{i},$ $c_{i}=\left\langle\mathbf{u},\mathbf{a}_{i}\right\rangle$.
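The expansion (1.3.20) is easy to verify numerically. In the sketch below the orthonormal basis $\{\mathbf{a}_i\}$ comes from a QR factorization of a random matrix (an arbitrary illustrative choice); the coefficients $c_i=\left\langle\mathbf{u},\mathbf{a}_i\right\rangle$ rebuild $\mathbf{u}$ exactly:

```python
import numpy as np

# Columns of Q form an orthonormal basis {a_i} of E_3 (illustrative choice).
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))
u = np.array([1.0, -2.0, 0.5])

c = Q.T @ u          # c_i = <u, a_i>  (real inner product)
u_rebuilt = Q @ c    # u = sum_i c_i a_i
print(np.allclose(u, u_rebuilt))  # → True
```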
##### 10: 21.3 Symmetry and Quasi-Periodicity
21.3.4 $\theta\genfrac{[}{]}{0.0pt}{}{\boldsymbol{{\alpha}}+\mathbf{m}_{1}}{\boldsymbol{{\beta}}+\mathbf{m}_{2}}\left(\mathbf{z}\middle|\boldsymbol{{\Omega}}\right)=e^{2\pi i\boldsymbol{{\alpha}}\cdot\mathbf{m}_{2}}\theta\genfrac{[}{]}{0.0pt}{}{\boldsymbol{{\alpha}}}{\boldsymbol{{\beta}}}\left(\mathbf{z}\middle|\boldsymbol{{\Omega}}\right).$
21.3.5 $\theta\genfrac{[}{]}{0.0pt}{}{\boldsymbol{{\alpha}}}{\boldsymbol{{\beta}}}\left(\mathbf{z}+\mathbf{m}_{1}+\boldsymbol{{\Omega}}\mathbf{m}_{2}\middle|\boldsymbol{{\Omega}}\right)=e^{2\pi i\left(\boldsymbol{{\alpha}}\cdot\mathbf{m}_{1}-\boldsymbol{{\beta}}\cdot\mathbf{m}_{2}-\frac{1}{2}\mathbf{m}_{2}\cdot\boldsymbol{{\Omega}}\cdot\mathbf{m}_{2}-\mathbf{m}_{2}\cdot\mathbf{z}\right)}\theta\genfrac{[}{]}{0.0pt}{}{\boldsymbol{{\alpha}}}{\boldsymbol{{\beta}}}\left(\mathbf{z}\middle|\boldsymbol{{\Omega}}\right).$
21.3.6 $\theta\genfrac{[}{]}{0.0pt}{}{\boldsymbol{{\alpha}}}{\boldsymbol{{\beta}}}\left(-\mathbf{z}\middle|\boldsymbol{{\Omega}}\right)=(-1)^{4\boldsymbol{{\alpha}}\cdot\boldsymbol{{\beta}}}\theta\genfrac{[}{]}{0.0pt}{}{\boldsymbol{{\alpha}}}{\boldsymbol{{\beta}}}\left(\mathbf{z}\middle|\boldsymbol{{\Omega}}\right).$
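For $g=1$ these relations can be checked directly from the defining series $\theta\genfrac{[}{]}{0.0pt}{}{\alpha}{\beta}\left(z\middle|\Omega\right)=\sum_{n\in\mathbb{Z}}e^{\pi i(n+\alpha)^{2}\Omega+2\pi i(n+\alpha)(z+\beta)}$. The sketch below truncates that series and, with the illustrative choices $\Omega=i$, $\alpha=\beta=\tfrac12$, $z=0.3+0.1i$ (all parameter values are assumptions for the demonstration), verifies the parity relation (21.3.6) and the $\mathbf{m}_{2}=\mathbf{0}$, $\mathbf{m}_{1}=1$ case of (21.3.5):

```python
import cmath

def theta(z, alpha, beta, Omega=1j, N=25):
    """Truncated g = 1 theta series with characteristics:
    sum_n exp(pi i (n+alpha)^2 Omega + 2 pi i (n+alpha)(z+beta)),
    convergent for Im(Omega) > 0."""
    return sum(cmath.exp(cmath.pi * 1j * (n + alpha) ** 2 * Omega
                         + 2 * cmath.pi * 1j * (n + alpha) * (z + beta))
               for n in range(-N, N + 1))

alpha, beta, z = 0.5, 0.5, 0.3 + 0.1j

# 21.3.6: theta(-z) = (-1)^{4 alpha beta} theta(z); here 4*alpha*beta = 1.
parity = abs(theta(-z, alpha, beta)
             - (-1) ** int(4 * alpha * beta) * theta(z, alpha, beta))

# 21.3.5 with m2 = 0, m1 = 1: theta(z + 1) = e^{2 pi i alpha} theta(z).
shift = abs(theta(z + 1, alpha, beta)
            - cmath.exp(2j * cmath.pi * alpha) * theta(z, alpha, beta))

print(parity, shift)  # both ~ 0
```

With $\alpha=\beta=\tfrac12$ the sign $(-1)^{4\alpha\cdot\beta}=-1$, so this characteristic gives an odd theta function, matching the classical $\theta_{1}$.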