Commit 195c16bd authored by Jim Hefferon

index through l

parent 8cd24b75
@@ -28,12 +28,12 @@ Two other sources, available online, are
%\bigskip
%\par\noindent
Formal mathematical statements come labelled as a
\definend{Theorem}\index{theorem}
\definend{Theorem} % \index{theorem}
for major points,
a \definend{Corollary}\index{corollary}
a \definend{Corollary} % \index{corollary}
for results that follow immediately from
a prior one, or a
\definend{Lemma}\index{lemma}
\definend{Lemma} %\index{lemma}
for results chiefly used to prove others.
Statements can be complex and have many parts.
@@ -266,7 +266,7 @@ is unique, even though no such number exists.)
\appendsection{Techniques of Proof}
\startword{Induction}
\index{induction}
\index{induction, mathematical}
Many proofs are iterative:
``Here's why the statement is true for the number \( 0 \),
it then follows for \( 1 \) and from there to \( 2 \) \ldots''.
@@ -285,12 +285,12 @@ Our induction proofs involve statements with one free natural number
variable.
Each proof has two steps.
In the \definend{base step}\index{base step!of induction}
In the \definend{base step}\index{base step, of induction proof}
we show that the statement holds for
some initial number $i\in \N$.
Often this step is a routine and short verification.
The second step,
the \definend{inductive step},\index{inductive step!of induction}
the \definend{inductive step},\index{inductive step, of induction proof}
is more subtle; we will show that this implication holds:
\begin{equation*}
\begin{tabular}{l}
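The displayed implication is cut off by the diff. As an illustrative sketch in the book's notation (an editorial example, not text from this commit): for the statement $1+2+\cdots+n=n(n+1)/2$, the base step checks $1=(1\cdot 2)/2$ and the inductive step checks
\begin{equation*}
  1+2+\cdots+k+(k+1)=\frac{k(k+1)}{2}+(k+1)=\frac{(k+1)(k+2)}{2}
\end{equation*}
so the statement for $k$ implies it for $k+1$.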
@@ -514,7 +514,7 @@ sets.
\begin{center}
\includegraphics{appen.3}
\end{center}
The \definend{intersection}\index{intersection}\index{set!intersection} is
The \definend{intersection}\index{intersection, of sets}\index{set!intersection} is
\( P\intersection Q=\set{x\suchthat \text{$(x\in P)$ and $(x\in Q)$}} \).
\begin{center}
\includegraphics{appen.2}
@@ -567,7 +567,7 @@ Thus \( \Re^2 \) is the set of pairs of reals.
A \definend{function}\index{function}
or \definend{map}\index{map} $\map{f}{D}{C}$ is
an association between input
\definend{arguments}\index{argument}\index{function!argument}
\definend{arguments}\index{function!argument}
$x\in D$
and output
\definend{values}\index{value}\index{function!value}
@@ -608,7 +608,7 @@ We often use $y$ to denote $f(x)$.
We also use the notation \( x\mapsunder{f} 16x^2-100 \), read
`\( x \) maps under \( f \) to \( 16x^2-100 \)' or
`\( 16x^2-100 \) is the
\definend{image}\index{image!under a function}\index{function!image}
\definend{image}\index{image, under a function}\index{function!image}
of \( x \)'.
A map such as \( x\mapsto \sin(1/x) \) is a
@@ -632,11 +632,12 @@ that the number \( 0 \) plays in real number addition or that
\( 1 \) plays in multiplication.
In line with that analogy, we define a
\definend{left inverse}\index{inverse!left} of a map
\definend{left inverse}\index{inverse!function!left}\index{inverse!left}\index{left inverse} of a map
\( \map{f}{X}{Y} \) to be a
function \( \map{g}{\text{range}(f)}{X} \) such that \( \composed{g}{f} \)
is the identity map on \( X \).
A \definend{right inverse}\index{inverse!right} of \( f \) is a
A \definend{right inverse}\index{inverse!function!right}\index{inverse!right}\index{right inverse}
of \( f \) is a
\( \map{h}{Y}{X} \) such that \( \composed{f}{h} \) is the identity.
For some $f$'s there is a map that is
@@ -648,7 +649,7 @@ If such a map exists then it is unique because if both \( g_1 \) and
=g_2(x) \)
(the middle equality comes from the associativity of function composition)
so we call it a \definend{two-sided inverse} or just
\definend{``the'' inverse},\index{inverse}\index{inverse!two-sided}\index{function!inverse}
\definend{``the'' inverse},\index{inverse}\index{inverse!two-sided}\index{function!inverse}\index{inverse!function}\index{inverse function}\index{inversion}
and denote it \( f^{-1} \).
For instance, the inverse of the function \( \map{f}{\Re}{\Re} \)
given by \( f(x)=2x-3 \) is the function \( \map{f^{-1}}{\Re}{\Re} \)
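The example is cut off by the diff; as a quick worked check (a sketch, not the commit's own text), that inverse is $f^{-1}(x)=(x+3)/2$, since
\begin{equation*}
  (\composed{f^{-1}}{f})(x)=\frac{(2x-3)+3}{2}=x
  \qquad\text{and}\qquad
  (\composed{f}{f^{-1}})(x)=2\cdot\frac{x+3}{2}-3=x
\end{equation*}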
@@ -759,7 +760,7 @@ are covered.
\startword{Equivalence Relations}
\index{relation!equivalence}\index{equivalence relation}
\index{relation!equivalence}\index{equivalence relation}\index{equivalence}
We shall need to express that two objects are alike in some way.
They aren't identical, but they are related
(e.g., two integers that give the same remainder when divided by \( 2 \)).
@@ -853,7 +854,7 @@ We call each part of a partition an \definend{equivalence class}.%
\index{equivalence!class}\index{class!equivalence}
We sometimes pick a single element of each equivalence class to be the
\definend{class representative}.%
\index{equivalence!representative}\index{representative}
\index{equivalence!representative}\index{class!representative}\index{representative!class}
\begin{center}
\includegraphics{appen.13}
\end{center}
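For a concrete instance (an editorial example, not part of this commit): under the relation on the integers of giving the same remainder on division by $2$, there are exactly two equivalence classes,
\begin{equation*}
  \set{\ldots,-2,0,2,4,\ldots}
  \qquad\text{and}\qquad
  \set{\ldots,-1,1,3,5,\ldots}
\end{equation*}
and a natural choice of class representatives is $0$ and $1$.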
@@ -99,7 +99,7 @@ Solving gives the value of one of the variables.
The generalization of this example is \definend{Cramer's Rule}:%
\index{determinant!Cramer's rule}%
\index{linear equation!solutions of!Cramer's rule}
\index{linear equation!solution of!Cramer's rule}
if \( \deter{A}\neq 0 \) then the system \( A\vec{x}=\vec{b} \) has the
unique solution
$
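The display is cut off by the diff. The standard conclusion of Cramer's Rule (a hedged sketch, not the commit's own text) is
\begin{equation*}
  x_i=\frac{\deter{B_i}}{\deter{A}}
\end{equation*}
where $B_i$ is the matrix $A$ with its column~$i$ replaced by $\vec{b}$.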
@@ -3,7 +3,7 @@
% 2001-Jun-11
\topic{Crystals}
\index{crystals|(}
Everyone has noticed that table salt\index{crystals!salt}\index{salt}
Everyone has noticed that table salt\index{salt}
comes in little cubes.
\begin{center}
\includegraphics[height=1.25in]{salt.jpg} %1.25in tall
@@ -43,7 +43,7 @@ Then we can describe, say, the corner in the upper right of the picture above
as $3\vec{\beta}_1+2\vec{\beta}_2$.
Another crystal from everyday experience is pencil lead.
It is \definend{graphite},\index{crystals!graphite}
It is \definend{graphite},\index{graphite}
formed from carbon atoms arranged in this shape.
\begin{center} %graphite
\includegraphics{ch2.10}
@@ -72,7 +72,7 @@ so this
\tag*{}\end{equation*}
is a good basis.
Another familiar crystal formed from carbon is diamond.\index{crystals!diamond}
Another familiar crystal formed from carbon is diamond.\index{diamond}
Like table salt it is built from cubes but the structure inside each
cube is more complicated.
In addition to carbons at each corner,
@@ -178,8 +178,8 @@ deleting row~\( i \) and column~\( j \) of \( T \) is the
\( i,j \) \definend{minor}\index{minor}\index{determinant!minor}%
\index{matrix!minor}
of \( T \).
The \( i,j \) \definend{cofactor}\index{cofactor}\index{determinant!cofactor}%
\index{matrix!cofactor}
The \( i,j \) \definend{cofactor}\index{cofactor}\index{determinant!using cofactors}%
% \index{matrix!cofactor}
\( T_{i,j} \) of \( T \) is
\( (-1)^{i+j} \) times the determinant of the \( i,j \) minor of \( T \).
%</df:Minor>
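A small worked instance (illustrative only, not from this commit): for a $2{\times}2$ matrix $T$ with entries $t_{i,j}$, deleting row~$1$ and column~$1$ leaves the $1{\times}1$ minor containing $t_{2,2}$, so the cofactor $T_{1,1}$ is $(-1)^{1+1}t_{2,2}=t_{2,2}$, while $T_{1,2}=(-1)^{1+2}t_{2,1}=-t_{2,1}$.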
@@ -26,7 +26,7 @@ However
it is not correct in other unit systems, because $16$ isn't the
right constant in those systems.
We can fix that by attaching units to the $16$, making it a
\definend{dimensional constant}\index{dimensional constant}.
\definend{dimensional constant}\index{dimensional!constant}.
\begin{equation*}
\text{dist}=16\,\frac{\text{ft}}{\text{sec}^2}\cdot (\text{time})^2
\end{equation*}
@@ -48,12 +48,12 @@ Moving away from a specific unit system allows us to just say that
we measure all quantities here in combinations
of some units of length~$L$, mass~$M$, and time~$T$.
These three are our
\definend{dimensions}\index{dimension!physical}.
\definend{physical dimensions}\index{physical dimension}.
For instance, we could measure velocity
in $\text{feet}/\text{second}$
or $\text{fathoms}/\text{hour}$ but at all events it involves
a unit of length divided by a unit of time
so the \definend{dimensional formula}\index{dimensional formula}
so the \definend{dimensional formula}\index{dimensional!formula}
of velocity is $L/T$.
Similarly, we could state density's dimensional formula as $M/L^3$.
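A quick check in the same spirit (an editorial aside, not from this commit): in $\text{dist}=16\,(\text{ft}/\text{sec}^2)\cdot(\text{time})^2$ the right side has dimensional formula
\begin{equation*}
  \frac{L}{T^2}\cdot T^2=L
\end{equation*}
which matches the left side, a length.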
@@ -8,9 +8,9 @@ In \emph{The Elements},\index{Euclid} Euclid considers two figures to be
the same if they have the same size and shape.
That is, while the triangles below are not equal because they are not the same
set of points,
they are \definend{congruent}\index{congruent figures}\Dash essentially
they are, for Euclid's purposes, essentially
indistinguishable
for Euclid's purposes\Dash because we can imagine
because we can imagine
picking the plane up,
sliding it over and rotating it a bit,
although not warping or stretching it,
@@ -27,8 +27,8 @@ map from the plane to itself.
Euclid considers only transformations
that may slide or turn the plane but not bend or stretch it.
Accordingly, define a map $\map{f}{\Re^2}{\Re^2}$ to be
\definend{distance-preserving}\index{distance-preserving}%
\index{map!distance-preserving}
\definend{distance-preserving}\index{distance-preserving map}%
\index{map!distance-preserving}\index{function!distance-preserving}
or a \definend{rigid motion}\index{rigid motion} or an
\definend{isometry}\index{isometry}
if for all points $P_1,P_2\in\Re^2$,
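The hunk ends mid-sentence; the condition being defined is the standard one (a sketch, not the commit's own text): writing $d(P_1,P_2)$ for the distance between points,
\begin{equation*}
  d(f(P_1),f(P_2))=d(P_1,P_2)
\end{equation*}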
@@ -145,7 +145,7 @@ This algorithm is
or \definend{linear elimination}\index{linear elimination}%
\index{system of linear equations!linear elimination}%
\index{system of linear equations!elimination}%
\index{elimination}).
\index{elimination, Gaussian}).
% It transforms the system, step by step, into one
% with a form that we can easily solve.
% We will first illustrate how it goes and then we will see the
@@ -1912,7 +1912,7 @@ is a rectangular array of numbers
with \( m \)~\definend{rows}\index{matrix!row}\index{row}
and \( n \)~\definend{columns}\index{matrix!column}\index{column}.
Each number in the matrix is an
\definend{entry}\index{matrix!entry}\index{entry}.
\definend{entry}\index{matrix!entry}\index{entry, matrix}.
%</df:matrix>
\end{definition}
@@ -2021,7 +2021,7 @@ is a matrix with a single column.
A matrix with a single row is a
\definend{row vector}\index{row!vector}\index{vector!row}.
The entries of a vector are its
\definend{components}\index{component}\index{vector!component}.
\definend{components}\index{component of a vector}\index{vector!component}.
A column or row vector whose components are all zeros is a
\definend{zero vector}.\index{zero vector}\index{vector!zero}
%</df:vector>
@@ -2069,7 +2069,9 @@ we first need to define these operations.
\begin{definition} \label{df:VectorSum}
%<*df:VectorSum>
The \definend{vector sum}\index{vector!sum}\index{sum!vector} of
The
\definend{vector sum}\index{vector!sum}\index{sum!vector}\index{addition of vectors}
of
\( \vec{u} \) and \( \vec{v} \) is the vector of the sums.
\begin{equation*}
\vec{u}+\vec{v}=
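The display is truncated by the diff; componentwise, the standard continuation (a sketch, not the commit's own text) is
\begin{equation*}
  \vec{u}+\vec{v}
  =\colvec{u_1 \\ \vdots \\ u_n}+\colvec{v_1 \\ \vdots \\ v_n}
  =\colvec{u_1+v_1 \\ \vdots \\ u_n+v_n}
\end{equation*}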
@@ -229,7 +229,8 @@ Another way to understand the vector sum is with the
\index{vector!sum}
Draw the parallelogram
formed by the vectors $\vec{v}$ and $\vec{w}$.
Then the sum $\vec{v}+\vec{w}$ extends along the diagonal
Then the sum $\vec{v}+\vec{w}$\index{vector!sum}\index{addition of vectors}
extends along the diagonal
to the far corner.
\begin{center}
\includegraphics{ch1.15}
@@ -254,7 +255,7 @@ canonical representation ends at that point.
\end{equation*}
And, we do addition and scalar multiplication component-wise.
Having considered points, we next turn to lines.
Having considered points, we next turn to lines.\index{line}
In $\Re^2$, the line through \( (1,2) \) and \( (3,1) \)
is comprised of (the endpoints of) the vectors in this set.
\begin{equation*}
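The set's display is cut off by the diff. Since the direction vector is forced by the two points, $(3,1)-(1,2)=(2,-1)$, the standard parametrization (a sketch, not the commit's own text) is
\begin{equation*}
  \set{\colvec[r]{1 \\ 2}+t\cdot\colvec[r]{2 \\ -1} \suchthat t\in\Re}
\end{equation*}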
@@ -356,7 +357,7 @@ $\set{\vec{p}+t_1\vec{v}_1+t_2\vec{v}_2+\cdots+t_k\vec{v}_k
where \( \vec{v}_1,\ldots,\vec{v}_k\in\Re^n \)
and $k\leq n$ is a
\definend{\( k \)-dimensional linear surface}\index{linear surface}
(or \definend{\( k \)-flat}\index{flat}).
(or \definend{\( k \)-flat}\index{flat, $k$-flat}).
For example, in $\Re^4$
\begin{equation*}
\set{\colvec[r]{2 \\ \pi \\ 3 \\ -0.5}
@@ -153,7 +153,7 @@ The answer is $x=5/2$ and $y=2$.
%<*GaussJordanReduction>
This extension of Gauss's Method is the
\definend{Gauss-Jordan Method}\index{Gauss's Method!Gauss-Jordan Method} or
\definend{Gauss-Jordan reduction}.\index{linear equation!solution of!Gauss-Jordan}\index{Gauss-Jordan}\index{Gauss's Method!Gauss-Jordan}
\definend{Gauss-Jordan reduction}.\index{linear equation!solution of!Gauss-Jordan}\index{Gauss's Method!Gauss-Jordan}
%</GaussJordanReduction>
% It goes past echelon form to a more refined, more specialized,
% matrix form.
@@ -62,7 +62,8 @@ use the standard bases to represent it by a matrix $H$.
Recall that $H$ factors into $H=PBQ$
where $P$ and $Q$ are nonsingular and $B$ is a partial-identity matrix.
Recall also that nonsingular matrices
factor into elementary matrices\index{matrix!elementary reduction}\index{elementary!matrix}
factor into elementary
matrices\index{matrix!elementary reduction}\index{elementary reduction matrix}
$PBQ=T_nT_{n-1}\cdots T_sBT_{s-1}\cdots T_1$,
which are matrices that
come from the identity $I$ after one Gaussian row operation,
@@ -60,7 +60,8 @@ Consequently in this chapter
we shall use complex numbers for our scalars,
including entries in vectors and matrices.
That is, we shift from studying vector spaces over the real numbers
to vector spaces over the complex numbers.
to vector spaces over the
complex numbers.\index{complex numbers!vector space over}
Any real number is a complex number and
in this chapter most of the examples use
only real numbers but
@@ -94,7 +95,7 @@ Consider a polynomial\index{polynomial}
$p(x)=c_nx^n+\dots+c_1x+c_0$ with
leading coefficient\index{polynomial!leading coefficient}
$c_n\neq 0$ and $n\geq 1$.
The degree\index{polynomial!degree}\index{degree of polynomial}
The degree\index{polynomial!degree}\index{degree of a polynomial}
of the polynomial is~$n$.
If $n=0$ then $p$ is a
constant polynomial\index{polynomial!constant}\index{constant polynomial}
@@ -204,7 +205,7 @@ roots of \( ax^2+bx+c \) are these
has no real number roots).
A polynomial that cannot be factored into two lower-degree polynomials
with real number coefficients is said to be irreducible over the
reals.\index{irreducible}\index{polynomial!irreducible}
reals.\index{irreducible polynomial}\index{polynomial!irreducible}
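For instance (an illustrative aside, not part of this commit), $x^2+1$ is irreducible over the reals, while over the complex numbers it factors as $(x-i)(x+i)$.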
\begin{theorem} \label{th:CubicsAndHigherFactor}
%<*th:CubicsAndHigherFactor>
@@ -275,7 +276,7 @@ into the product of two first degree polynomials.
\end{equation*}
\end{example}
\begin{theorem}[Fundamental Theorem of Algebra] \label{th:FundThmAlg}
\begin{theorem}[Fundamental Theorem of Algebra] \label{th:FundThmAlg}\index{Fundamental Theorem!of Algebra}
\hspace*{0em plus2em}
%<*th:FundThmAlg>
Polynomials with complex coefficients factor into linear
@@ -351,8 +352,8 @@ For instance, we shall call this
\dots,
\colvec{0+0i \\ 0+0i \\ \vdots \\ 1+0i}}
\end{equation*}
the \definend{standard basis\/}\index{basis!standard}%
\index{basis!standard over the complex numbers}
the \definend{standard basis}\index{standard basis}\index{basis!standard}%
\index{standard basis!complex number scalars}
for \( \C^n \) as a vector space over $\C$
and again denote it \( \stdbasis_n \).
@@ -2324,11 +2324,11 @@ where \( b\neq 0 \).
\begin{definition} \label{df:CharacteristicPoly}
%<*df:CharacteristicPoly>
The \definend{characteristic polynomial of a square matrix}\index{characteristic polynomial}%
The \definend{characteristic polynomial of a square matrix}\index{characteristic!polynomial}%
\index{matrix!characteristic polynomial}
\( T \) is the
determinant \( \deter{T-x I} \) where \( x \) is a variable.
The \definend{characteristic equation}\index{characteristic equation}%
The \definend{characteristic equation}\index{characteristic!equation}%
\index{matrix!characteristic polynomial}
is $\deter{T-xI}=0$.
The \definend{characteristic polynomial of a transformation}
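A small worked case (an editorial example, not from this commit): for the $2{\times}2$ matrix $T$ with first row $1,2$ and second row $3,4$,
\begin{equation*}
  \deter{T-xI}=(1-x)(4-x)-2\cdot 3=x^2-5x-2
\end{equation*}
so the characteristic equation is $x^2-5x-2=0$.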
@@ -3512,7 +3512,7 @@ Apply \nearbylemma{lm:DiagIffBasisOfEigens}.
\cite{MathMag67p232}
Show that if \( A \) is an \( n \) square matrix and each row (column)
sums to \( c \) then \( c \) is a characteristic root of \( A \).
(``Characteristic root'' is a synonym for eigenvalue.)\index{characteristic root}\index{root!characteristic}
(``Characteristic root'' is a synonym for eigenvalue.)\index{characteristic!root}\index{root!characteristic}
\begin{answer}
\answerasgiven %
If the argument of the characteristic function of \( A \) is set equal to
@@ -688,7 +688,7 @@ A \definend{nilpotent matrix}\index{matrix!nilpotent}%
\index{nilpotent!matrix}
is one with a power that is the zero matrix.
In either case, the least such power is the \definend{index of nilpotency}.%
\index{nilpotency!index}\index{index!of nilpotency}
\index{nilpotency!index}\index{index, of nilpotency}
\end{definition}
\begin{example}
@@ -1844,7 +1844,7 @@ such that \( n(\vec{\beta}_1)=\vec{\beta}_2 \).
that is, prove that \( t \) restricted to the span has a range
that is a subset of the span.
We say that the span is a \definend{\( t \)-invariant}
subspace.\index{invariant!subspace}
subspace.\index{invariant subspace}
\partsitem Prove that the restriction is nilpotent.
\partsitem Prove that the $t$-string
is linearly independent and so is a basis for its span.
@@ -423,7 +423,7 @@ The total on the right is the zero matrix.
We refer to that result by saying that a
matrix or map
\definend{satisfies}\index{characteristic polynomial!satisfied by}
\definend{satisfies}\index{characteristic!polynomial!satisfied by}
its characteristic polynomial.
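For a concrete check (illustrative only, not from this commit): the $2{\times}2$ matrix $T$ with rows $1,2$ and $3,4$ has characteristic polynomial $x^2-5x-2$, and $T^2$ has rows $7,10$ and $15,22$, so $T^2-5T-2I$ is indeed the zero matrix.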
\begin{lemma} \label{le:tSatisImpMinPolyDivides}
@@ -1992,7 +1992,7 @@ condition.
\begin{definition} \label{def:invariant}
Let \( \map{t}{V}{V} \) be a transformation.
A subspace \( M \) is \definend{$t$ invariant}%
\index{invariant subspace!definition}\index{subspace!invariant}
\index{invariant subspace}\index{subspace!invariant}
if whenever \( \vec{m}\in M \) then \( t(\vec{m})\in M \)
(shorter: \( t(M)\subseteq M \)).
\end{definition}
@@ -37,10 +37,10 @@ and scalar multiplication
if \( \vec{v}\in V \) and \( r\in\Re \) then
\( h(r\cdot\vec{v})=r\cdot h(\vec{v}) \)
\end{center}
is a \definend{homomorphism}\index{homomorphism}%
is a \definend{homomorphism}\index{homomorphism}\index{linear map}%
\index{function!structure preserving!\see{homomorphism}}%
\index{vector space!homomorphism}\index{vector space!map}
or \definend{linear map}\index{linear map!see{homomorphism}}.
or \definend{linear map}\index{linear map|seealso{homomorphism}}.
%</df:Homo>
\end{definition}
@@ -282,7 +282,7 @@ let
$B=\sequence{\vec{\beta}_1,\ldots,\vec{\beta}_n}$
be a basis for~$V$.
A function defined on that basis $\map{f}{B}{W}$
is \definend{extended linearly}\index{extended linearly}\index{function!extended linearly}\index{linear extension of a function}
is \definend{extended linearly}\index{extended, linearly}\index{function!extended linearly}\index{linear extension of a function}
to a function $\map{\hat{f}}{V}{W}$ if
for all $\vec{v}\in V$ such that
$\vec{v}=c_1\vec{\beta}_1+\cdots+c_n\vec{\beta}_n$,
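The hunk ends mid-definition; the standard continuation (a sketch, not the commit's own text) is that the extension acts by
\begin{equation*}
  \hat{f}(\vec{v})=c_1f(\vec{\beta}_1)+\cdots+c_nf(\vec{\beta}_n)
\end{equation*}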
@@ -320,7 +320,7 @@ like this one, using matrices.
\begin{definition} \label{df:LinearTransformation}
%<*df:LinearTransformation>
A linear map from a space into itself \( \map{t}{V}{V} \) is a
\definend{linear transformation}\index{linear transformation!see{transformation}}.
\definend{linear transformation}\index{linear transformation}\index{linear transformation|seealso{transformation}}.
%</df:LinearTransformation>
\end{definition}
@@ -398,7 +398,7 @@ from \( V \) to \( W \).
%<*SpLinFcns>
\noindent We denote the space of linear maps from $V$ to~$W$ by
\( \linmaps{V}{W} \).\index{linear maps!space of}
\( \linmaps{V}{W} \).\index{linear maps, vector space of}
%</SpLinFcns>
\begin{proof}
@@ -1712,7 +1712,7 @@ is a member of $S$.
\begin{definition} \label{df:NullSpace}
%<*df:NullSpace>
The \definend{null space}\index{homomorphism!null space}\index{null space}
or \definend{kernel}\index{kernel} of a linear map
or \definend{kernel}\index{kernel, of linear map} of a linear map
\( \map{h}{V}{W} \) is the inverse image of $\zero_W$.
\begin{equation*}
\nullspace{h}=h^{-1}(\zero_W)=\set{\vec{v}\in V\suchthat h(\vec{v})=\zero_W}
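For a concrete instance (an editorial example, not from this commit): the map $\map{h}{\Re^2}{\Re}$ sending $\colvec{x \\ y}$ to $x+y$ has
\begin{equation*}
  \nullspace{h}=\set{\colvec{x \\ y}\suchthat y=-x}
\end{equation*}
the line through the origin with slope $-1$.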
@@ -2862,7 +2862,7 @@ perform the combination operation \( -2\rho_2+\rho_3 \).
\begin{definition} \label{df:ElementaryReductionMatrices}
%<*df:ElementaryReductionMatrices>
The \definend{elementary reduction matrices}%
\index{matrix!elementary reduction}\index{elementary!matrix}
\index{matrix!elementary reduction}\index{elementary reduction matrix}
result from applying one Gaussian operation to an identity matrix.
\begin{enumerate}
\item \( I\grstep{k\rho_i}M_i(k) \) for \( k\neq 0 \)
@@ -2875,7 +2875,7 @@ result from applying one Gaussian operation to an identity matrix.
\begin{lemma} \label{GrByMatMult}
%<*lm:GrByMatMult>
Matrix multiplication can do Gaussian reduction.
Matrix multiplication can do Gaussian reduction.\index{elementary reduction operations!by matrix multiplication}\index{elementary row operations!by matrix multiplication}\index{Gauss's Method!by matrix multiplication}
\begin{enumerate}
\item If \( H\grstep{k\rho_i}G \) then \( M_i(k)H=G \).
\item If \( H\grstep{\rho_i\leftrightarrow\rho_j}G \)
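The list is cut off by the diff. As a small check of the first item (illustrative only, not from this commit): with $2{\times}2$ matrices, $M_1(3)$ has rows $3,0$ and $0,1$, and left-multiplying by it, $M_1(3)H$ triples the first row of $H$, exactly the effect of the row operation $3\rho_1$.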
@@ -148,7 +148,7 @@ whose entries are nonnegative reals and whose columns sum to $1$.
A characteristic feature of
a Markov chain model is that it is
\definend{historyless}\index{historyless}%
\definend{historyless}\index{historyless process}%
\index{Markov chain!historyless} in that
the next state depends only on the current state,
not on any prior ones.
@@ -84,7 +84,7 @@ The solution
changes radically depending on the ninth digit, which explains why
an eight-place computer has trouble.
A problem that is very sensitive to inaccuracy or uncertainties in
the input values is \definend{ill-conditioned}.\index{ill-conditioned}
the input values is \definend{ill-conditioned}.\index{ill-conditioned problem}
The above example gives one way in which a system can be
difficult to solve on a computer.
@@ -327,11 +327,11 @@ Thus we can think of projective space as consisting of the Euclidean plane
with some extra points adjoined \Dash
the Euclidean plane is embedded in the projective plane.
The extra points in projective space, the equatorial points,
are called \definend{ideal points}\index{ideal point}%
are called \definend{ideal points}\index{ideal!point}%
\index{projective plane!ideal point}
or \definend{points at infinity}\index{point!at infinity}
and the equator is called the
\definend{ideal line}\index{ideal line}%
\definend{ideal line}\index{ideal!line}%
\index{projective plane!ideal line} or
\definend{line at infinity}\index{line at infinity}
(it is not a Euclidean line, it is a projective line).
@@ -92,8 +92,7 @@ operations `+' and `\( \cdot \)' subject to these conditions.
%<*df:VectorSpace1>
Where \( \vec{v},\vec{w}\in V \),
(1)~their \definend{vector sum}\index{vector!sum}\index{sum!vector}%
\index{addition!vector}
(1)~their \definend{vector sum}\index{vector!sum}\index{sum!vector}\index{addition of vectors}
\( \vec{v}+\vec{w} \) is an element of \( V\/\).
If \( \vec{u},\vec{v},\vec{w}\in V \) then
(2)~\( \vec{v}+\vec{w}=\vec{w}+\vec{v} \) and
@@ -2441,9 +2441,9 @@ not directly involving row vectors.
\begin{definition} \label{df:ColumnSpace}
%<*df:ColumnSpace>
The \definend{column space\/}\index{column space}\index{matrix!column space}
The \definend{column space}\index{column!space}\index{matrix!column space}
of a matrix is the span of the set of its columns.
The \definend{column rank\/}\index{column!rank}\index{rank!column}
The \definend{column rank}\index{column!rank}\index{rank!column}
is the dimension of the column space, the number of linearly independent
columns.
%</df:ColumnSpace>
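A quick instance (illustrative only, not from this commit): the matrix with rows $1,2$ and $2,4$ has second column twice the first, so its column space is the span of the single column $\colvec[r]{1 \\ 2}$ and its column rank is~$1$.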
@@ -3475,7 +3475,7 @@ Where there are \( r \) independent equations, the general solution involves
\end{answer}
\item
Show that the transpose operation is
\definend{linear}:\index{linear!transpose operation}
linear:\index{transpose!is linear}
\begin{equation*}
\trans{(rA+sB)} = r\trans{A}+s\trans{B}
\end{equation*}
@@ -3708,7 +3708,7 @@ Where there are \( r \) independent equations, the general solution involves
\definend{full row rank}\index{full row rank}\index{row rank!full}
if its row rank
is \( m \), and it has
\definend{full column rank}\index{full column rank}\index{column rank!full}
\definend{full column rank}\index{full column rank}\index{column!rank!full}
if its column rank is \( n \).
\begin{exparts}
\partsitem Show that
@@ -4096,7 +4096,7 @@ has one and only one solution for any $x,y,z\in\Re$.
% each vector decomposes uniquely into a sum of vectors from the parts.
\begin{definition}
The \definend{concatenation}\index{concatenation}%
The \definend{concatenation}\index{concatenation of sequences}%
\index{sequence!concatenation}
of the sequences
$