Commit 195c16bd authored by Jim Hefferon

index through l

parent 8cd24b75
@@ -28,12 +28,12 @@ Two other sources, available online, are
 %\bigskip
 %\par\noindent
 Formal mathematical statements come labelled as a
-\definend{Theorem}\index{theorem}
+\definend{Theorem} % \index{theorem}
 for major points,
-a \definend{Corollary}\index{corollary}
+a \definend{Corollary} % \index{corollary}
 for results that follow immediately from
 a prior one, or a
-\definend{Lemma}\index{lemma}
+\definend{Lemma} %\index{lemma}
 for results chiefly used to prove others.
 Statements can be complex and have many parts.
@@ -266,7 +266,7 @@ is unique, even though no such number exists.)
 \appendsection{Techniques of Proof}
 \startword{Induction}
-\index{induction}
+\index{induction, mathematical}
 Many proofs are iterative,
 ``Here's why the statement is true for the number \( 0 \),
 it then follows for \( 1 \) and from there to \( 2 \) \ldots''.
@@ -285,12 +285,12 @@ Our induction proofs involve statements with one free natural number
 variable.
 Each proof has two steps.
-In the \definend{base step}\index{base step!of induction}
+In the \definend{base step}\index{base step, of induction proof}
 we show that the statement holds for
 some initial number $i\in \N$.
 Often this step is a routine, and short, verification.
 The second step,
-the \definend{inductive step},\index{inductive step!of induction}
+the \definend{inductive step},\index{inductive step, of induction proof}
 is more subtle; we will show that this implication holds:
 \begin{equation*}
 \begin{tabular}{l}
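To make the two steps concrete, here is the standard induction argument for the sum formula, an illustrative editorial sketch in plain LaTeX (the formula is not from the changed file):

```latex
% Claim: 1+2+\cdots+n = n(n+1)/2 for every n\geq 1.
% Base step: for the initial number i=1 the check is routine: 1 = 1\cdot 2/2.
% Inductive step: assuming the claim holds for k, it follows for k+1:
\begin{equation*}
  1+2+\cdots+k+(k+1)
  =\frac{k(k+1)}{2}+(k+1)
  =\frac{(k+1)(k+2)}{2}
\end{equation*}
```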
@@ -514,7 +514,7 @@ sets.
 \begin{center}
 \includegraphics{appen.3}
 \end{center}
-The \definend{intersection}\index{intersection}\index{set!intersection} is
+The \definend{intersection}\index{intersection, of sets}\index{set!intersection} is
 \( P\intersection Q=\set{x\suchthat \text{$(x\in P)$ and $(x\in Q)$}} \).
 \begin{center}
 \includegraphics{appen.2}
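For concreteness (illustrative sets, not from the file): with \( P=\{1,2,3\} \) and \( Q=\{2,3,4\} \) the definition gives

```latex
\begin{equation*}
  P\cap Q=\{x\mid \text{$(x\in P)$ and $(x\in Q)$}\}=\{2,3\}
\end{equation*}
```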
@@ -567,7 +567,7 @@ Thus \( \Re^2 \) is the set of pairs of reals.
 A \definend{function}\index{function}
 or \definend{map}\index{map} $\map{f}{D}{C}$ is
 an association between input
-\definend{arguments}\index{argument}\index{function!argument}
+\definend{arguments}\index{function!argument}
 $x\in D$
 and output
 \definend{values}\index{value}\index{function!value}
@@ -608,7 +608,7 @@ We often use $y$ to denote $f(x)$.
 We also use the notation \( x\mapsunder{f} 16x^2-100 \), read
 `\( x \) maps under \( f \) to \( 16x^2-100 \)' or
 `\( 16x^2-100 \) is the
-\definend{image}\index{image!under a function}\index{function!image}
+\definend{image}\index{image, under a function}\index{function!image}
 of \( x \)'.
 A map such as \( x\mapsto \sin(1/x) \) is a
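A quick worked instance of that notation (illustrative numbers): under \( x\mapsunder{f} 16x^2-100 \) the image of \( 3 \) is

```latex
\begin{equation*}
  16\cdot 3^{2}-100=144-100=44
\end{equation*}
```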
@@ -632,11 +632,12 @@ that the number \( 0 \) plays in real number addition or that
 \( 1 \) plays in multiplication.
 In line with that analogy, we define a
-\definend{left inverse}\index{inverse!left} of a map
+\definend{left inverse}\index{inverse!function!left}\index{inverse!left}\index{left inverse} of a map
 \( \map{f}{X}{Y} \) to be a
 function \( \map{g}{\text{range}(f)}{X} \) such that \( \composed{g}{f} \)
 is the identity map on \( X \).
-A \definend{right inverse}\index{inverse!right} of \( f \) is a
+A \definend{right inverse}\index{inverse!function!right}\index{inverse!right}\index{right inverse}
+of \( f \) is a
 \( \map{h}{Y}{X} \) such that \( \composed{f}{h} \) is the identity.
 For some $f$'s there is a map that is
@@ -648,7 +649,7 @@ If such a map exists then it is unique because if both \( g_1 \) and
 =g_2(x) \)
 (the middle equality comes from the associativity of function composition)
 so we call it a \definend{two-sided inverse} or just
-\definend{``the'' inverse},\index{inverse}\index{inverse!two-sided}\index{function!inverse}
+\definend{``the'' inverse},\index{inverse}\index{inverse!two-sided}\index{function!inverse}\index{inverse!function}\index{inverse function}\index{inversion}
 and denote it \( f^{-1} \).
 For instance, the inverse of the function \( \map{f}{\Re}{\Re} \)
 given by \( f(x)=2x-3 \) is the function \( \map{f^{-1}}{\Re}{\Re} \)
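Verifying that example (an editorial check, not text from the file): the inverse of \( f(x)=2x-3 \) is \( f^{-1}(y)=(y+3)/2 \), and both compositions give the identity.

```latex
\begin{equation*}
  f^{-1}(f(x))=\frac{(2x-3)+3}{2}=x
  \qquad
  f(f^{-1}(y))=2\cdot\frac{y+3}{2}-3=y
\end{equation*}
```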
@@ -759,7 +760,7 @@ are covered.
 \startword{Equivalence Relations}
-\index{relation!equivalence}\index{equivalence relation}
+\index{relation!equivalence}\index{equivalence relation}\index{equivalence}
 We shall need to express that two objects are alike in some way.
 They aren't identical, but they are related
 (e.g., two integers that give the same remainder when divided by \( 2 \)).
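Spelling out that parenthetical example (an illustrative sketch):

```latex
% Write x \sim y when x and y leave the same remainder on division by 2.
% Then 3 \sim 7 and 2 \sim 10, but 3 \not\sim 6.
% The relation partitions the integers into two classes:
\begin{equation*}
  \{\ldots,-2,0,2,4,\ldots\}
  \qquad
  \{\ldots,-1,1,3,5,\ldots\}
\end{equation*}
```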
@@ -853,7 +854,7 @@ We call each part of a partition an \definend{equivalence class}.%
 \index{equivalence!class}\index{class!equivalence}
 We sometimes pick a single element of each equivalence class to be the
 \definend{class representative}.%
-\index{equivalence!representative}\index{representative}
+\index{equivalence!representative}\index{class!representative}\index{representative!class}
 \begin{center}
 \includegraphics{appen.13}
 \end{center}
...
No preview for this file type
@@ -99,7 +99,7 @@ Solving gives the value of one of the variables.
 The generalization of this example is \definend{Cramer's Rule}:%
 \index{determinant!Cramer's rule}%
-\index{linear equation!solutions of!Cramer's rule}
+\index{linear equation!solution of!Cramer's rule}
 if \( \deter{A}\neq 0 \) then the system \( A\vec{x}=\vec{b} \) has the
 unique solution
 $
...
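A worked instance of the rule (my numbers; assumes the amsmath `vmatrix` environment): for the system \( x+2y=6 \), \( 3x+y=8 \) the determinant is \( \deter{A}=1\cdot 1-2\cdot 3=-5 \), so

```latex
\begin{equation*}
  x=\frac{\begin{vmatrix}6&2\\8&1\end{vmatrix}}{-5}=\frac{-10}{-5}=2
  \qquad
  y=\frac{\begin{vmatrix}1&6\\3&8\end{vmatrix}}{-5}=\frac{-10}{-5}=2
\end{equation*}
```

Substituting back checks it: \( 2+2\cdot 2=6 \) and \( 3\cdot 2+2=8 \).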
@@ -3,7 +3,7 @@
 % 2001-Jun-11
 \topic{Crystals}
 \index{crystals|(}
-Everyone has noticed that table salt\index{crystals!salt}\index{salt}
+Everyone has noticed that table salt\index{salt}
 comes in little cubes.
 \begin{center}
 \includegraphics[height=1.25in]{salt.jpg} %1.25in tall
@@ -43,7 +43,7 @@ Then we can describe, say, the corner in the upper right of the picture above
 as $3\vec{\beta}_1+2\vec{\beta}_2$.
 Another crystal from everyday experience is pencil lead.
-It is \definend{graphite},\index{crystals!graphite}
+It is \definend{graphite},\index{graphite}
 formed from carbon atoms arranged in this shape.
 \begin{center} %graphite
 \includegraphics{ch2.10}
@@ -72,7 +72,7 @@ so this
 \tag*{}\end{equation*}
 is a good basis.
-Another familiar crystal formed from carbon is diamond.\index{crystals!diamond}
+Another familiar crystal formed from carbon is diamond.\index{diamond}
 Like table salt it is built from cubes but the structure inside each
 cube is more complicated.
 In addition to carbons at each corner,
...
@@ -178,8 +178,8 @@ deleting row~\( i \) and column~\( j \) of \( T \) is the
 \( i,j \) \definend{minor}\index{minor}\index{determinant!minor}%
 \index{matrix!minor}
 of \( T \).
-The \( i,j \) \definend{cofactor}\index{cofactor}\index{determinant!cofactor}%
-\index{matrix!cofactor}
+The \( i,j \) \definend{cofactor}\index{cofactor}\index{determinant!using cofactors}%
+% \index{matrix!cofactor}
 \( T_{i,j} \) of \( T \) is
 \( (-1)^{i+j} \) times the determinant of the \( i,j \) minor of \( T \).
 %</df:Minor>
...
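A small worked case of the minor/cofactor definition above (my matrix; assumes amsmath):

```latex
% For T=\begin{pmatrix}1&2\\3&4\end{pmatrix}, deleting row 1 and column 2
% leaves the 1,2 minor, the 1x1 matrix (3), whose determinant is 3, so
\begin{equation*}
  T_{1,2}=(-1)^{1+2}\cdot 3=-3
\end{equation*}
```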
@@ -26,7 +26,7 @@ However
 it is not correct in other unit systems, because $16$ isn't the
 right constant in those systems.
 We can fix that by attaching units to the $16$, making it a
-\definend{dimensional constant}\index{dimensional constant}.
+\definend{dimensional constant}\index{dimensional!constant}.
 \begin{equation*}
 \text{dist}=16\,\frac{\text{ft}}{\text{sec}^2}\cdot (\text{time})^2
 \end{equation*}
@@ -48,12 +48,12 @@ Moving away from a specific unit system allows us to just say that
 we measure all quantities here in combinations
 of some units of length~$L$, mass~$M$, and time~$T$.
 These three are our
-\definend{dimensions}\index{dimension!physical}.
+\definend{physical dimensions}\index{physical dimension}.
 For instance, we could measure velocity
 in $\text{feet}/\text{second}$
 or $\text{fathoms}/\text{hour}$ but at all events it involves
 a unit of length divided by a unit of time
-so the \definend{dimensional formula}\index{dimensional formula}
+so the \definend{dimensional formula}\index{dimensional!formula}
 of velocity is $L/T$.
 Similarly, we could state density's dimensional formula as $M/L^3$.
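One more dimensional formula in the same spirit (my example, not from the file): kinetic energy \( \tfrac{1}{2}mv^2 \) combines a mass with a velocity squared, so

```latex
\begin{equation*}
  M\cdot\Bigl(\frac{L}{T}\Bigr)^{2}=\frac{ML^{2}}{T^{2}}
\end{equation*}
```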
...
@@ -8,9 +8,9 @@ In \emph{The Elements},\index{Euclid} Euclid considers two figures to be
 the same if they have the same size and shape.
 That is, while the triangles below are not equal because they are not the same
 set of points,
-they are \definend{congruent}\index{congruent figures}\Dash essentially
+they are, for Euclid's purposes, essentially
 indistinguishable
-for Euclid's purposes\Dash because we can imagine
+because we can imagine
 picking the plane up,
 sliding it over and rotating it a bit,
 although not warping or stretching it,
@@ -27,8 +27,8 @@ map from the plane to itself.
 Euclid considers only transformations
 that may slide or turn the plane but not bend or stretch it.
 Accordingly, define a map $\map{f}{\Re^2}{\Re^2}$ to be
-\definend{distance-preserving}\index{distance-preserving}%
-\index{map!distance-preserving}
+\definend{distance-preserving}\index{distance-preserving map}%
+\index{map!distance-preserving}\index{function!distance-preserving}
 or a \definend{rigid motion}\index{rigid motion} or an
 \definend{isometry}\index{isometry}
 if for all points $P_1,P_2\in\Re^2$,
...
@@ -145,7 +145,7 @@ This algorithm is
 or \definend{linear elimination}\index{linear elimination}%
 \index{system of linear equations!linear elimination}%
 \index{system of linear equations!elimination}%
-\index{elimination}).
+\index{elimination, Gaussian}).
 % It transforms the system, step by step, into one
 % with a form that we can easily solve.
 % We will first illustrate how it goes and then we will see the
@@ -1912,7 +1912,7 @@ is a rectangular array of numbers
 with \( m \)~\definend{rows}\index{matrix!row}\index{row}
 and \( n \)~\definend{columns}\index{matrix!column}\index{column}.
 Each number in the matrix is an
-\definend{entry}\index{matrix!entry}\index{entry}.
+\definend{entry}\index{matrix!entry}\index{entry, matrix}.
 %</df:matrix>
 \end{definition}
@@ -2021,7 +2021,7 @@ is a matrix with a single column.
 A matrix with a single row is a
 \definend{row vector}\index{row!vector}\index{vector!row}.
 The entries of a vector are its
-\definend{components}\index{component}\index{vector!component}.
+\definend{components}\index{component of a vector}\index{vector!component}.
 A column or row vector whose components are all zeros is a
 \definend{zero vector}.\index{zero vector}\index{vector!zero}
 %</df:vector>
@@ -2069,7 +2069,9 @@ we first need to define these operations.
 \begin{definition} \label{df:VectorSum}
 %<*df:VectorSum>
-The \definend{vector sum}\index{vector!sum}\index{sum!vector} of
+The
+\definend{vector sum}\index{vector!sum}\index{sum!vector}\index{addition of vectors}
+of
 \( \vec{u} \) and \( \vec{v} \) is the vector of the sums.
 \begin{equation*}
 \vec{u}+\vec{v}=
...
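A component-wise instance of the vector-sum definition (illustrative numbers; assumes amsmath):

```latex
\begin{equation*}
  \begin{pmatrix}1\\2\end{pmatrix}+\begin{pmatrix}3\\-1\end{pmatrix}
  =\begin{pmatrix}1+3\\2+(-1)\end{pmatrix}
  =\begin{pmatrix}4\\1\end{pmatrix}
\end{equation*}
```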
@@ -229,7 +229,8 @@ Another way to understand the vector sum is with the
 \index{vector!sum}
 Draw the parallelogram
 formed by the vectors $\vec{v}$ and $\vec{w}$.
-Then the sum $\vec{v}+\vec{w}$ extends along the diagonal
+Then the sum $\vec{v}+\vec{w}$\index{vector!sum}\index{addition of vectors}
+extends along the diagonal
 to the far corner.
 \begin{center}
 \includegraphics{ch1.15}
@@ -254,7 +255,7 @@ canonical representation ends at that point.
 \end{equation*}
 And, we do addition and scalar multiplication component-wise.
-Having considered points, we next turn to lines.
+Having considered points, we next turn to lines.\index{line}
 In $\Re^2$, the line through \( (1,2) \) and \( (3,1) \)
 is comprised of (the endpoints of) the vectors in this set.
 \begin{equation*}
@@ -356,7 +357,7 @@ $\set{\vec{p}+t_1\vec{v}_1+t_2\vec{v}_2+\cdots+t_k\vec{v}_k
 where \( \vec{v}_1,\ldots,\vec{v}_k\in\Re^n \)
 and $k\leq n$ is a
 \definend{\( k \)-dimensional linear surface}\index{linear surface}
-(or \definend{\( k \)-flat}\index{flat}).
+(or \definend{\( k \)-flat}\index{flat, $k$-flat}).
 For example, in $\Re^4$
 \begin{equation*}
 \set{\colvec[r]{2 \\ \pi \\ 3 \\ -0.5}
...
@@ -153,7 +153,7 @@ The answer is $x=5/2$ and $y=2$.
 %<*GaussJordanReduction>
 This extension of Gauss's Method is the
 \definend{Gauss-Jordan Method}\index{Gauss's Method!Gauss-Jordan Method} or
-\definend{Gauss-Jordan reduction}.\index{linear equation!solution of!Gauss-Jordan}\index{Gauss-Jordan}\index{Gauss's Method!Gauss-Jordan}
+\definend{Gauss-Jordan reduction}.\index{linear equation!solution of!Gauss-Jordan}\index{Gauss's Method!Gauss-Jordan}
 %</GaussJordanReduction>
 % It goes past echelon form to a more refined, more specialized,
 % matrix form.
...
@@ -62,7 +62,8 @@ use the standard bases to represent it by a matrix $H$.
 Recall that $H$ factors into $H=PBQ$
 where $P$ and $Q$ are nonsingular and $B$ is a partial-identity matrix.
 Recall also that nonsingular matrices
-factor into elementary matrices\index{matrix!elementary reduction}\index{elementary!matrix}
+factor into elementary
+matrices\index{matrix!elementary reduction}\index{elementary reduction matrix}
 $PBQ=T_nT_{n-1}\cdots T_sBT_{s-1}\cdots T_1$,
 which are matrices that
 come from the identity $I$ after one Gaussian row operation,
...
@@ -60,7 +60,8 @@ Consequently in this chapter
 we shall use complex numbers for our scalars,
 including entries in vectors and matrices.
 That is, we shift from studying vector spaces over the real numbers
-to vector spaces over the complex numbers.
+to vector spaces over the
+complex numbers.\index{complex numbers!vector space over}
 Any real number is a complex number and
 in this chapter most of the examples use
 only real numbers but
@@ -94,7 +95,7 @@ Consider a polynomial\index{polynomial}
 $p(x)=c_nx^n+\dots+c_1x+c_0$ with
 leading coefficient\index{polynomial!leading coefficient}
 $c_n\neq 0$ and $n\geq 1$.
-The degree\index{polynomial!degree}\index{degree of polynomial}
+The degree\index{polynomial!degree}\index{degree of a polynomial}
 of the polynomial is~$n$.
 If $n=0$ then $p$ is a
 constant polynomial\index{polynomial!constant}\index{constant polynomial}
@@ -204,7 +205,7 @@ roots of \( ax^2+bx+c \) are these
 has no real number roots).
 A polynomial that cannot be factored into two lower-degree polynomials
 with real number coefficients is said to be irreducible over the
-reals.\index{irreducible}\index{polynomial!irreducible}
+reals.\index{irreducible polynomial}\index{polynomial!irreducible}
 \begin{theorem} \label{th:CubicsAndHigherFactor}
 %<*th:CubicsAndHigherFactor>
@@ -275,7 +276,7 @@ into the product of two first degree polynomials.
 \end{equation*}
 \end{example}
-\begin{theorem}[Fundamental Theorem of Algebra] \label{th:FundThmAlg}
+\begin{theorem}[Fundamental Theorem of Algebra] \label{th:FundThmAlg}\index{Fundamental Theorem!of Algebra}
 \hspace*{0em plus2em}
 %<*th:FundThmAlg>
 Polynomials with complex coefficients factor into linear
@@ -351,8 +352,8 @@ For instance, we shall call this
 \dots,
 \colvec{0+0i \\ 0+0i \\ \vdots \\ 1+0i}}
 \end{equation*}
-the \definend{standard basis\/}\index{basis!standard}%
-\index{basis!standard over the complex numbers}
+the \definend{standard basis}\index{standard basis}\index{basis!standard}%
+\index{standard basis!complex number scalars}
 for \( \C^n \) as a vector space over $\C$
 and again denote it \( \stdbasis_n \).
...
@@ -2324,11 +2324,11 @@ where \( b\neq 0 \).
 \begin{definition} \label{df:CharacteristicPoly}
 %<*df:CharacteristicPoly>
-The \definend{characteristic polynomial of a square matrix}\index{characteristic polynomial}%
+The \definend{characteristic polynomial of a square matrix}\index{characteristic!polynomial}%
 \index{matrix!characteristic polynomial}
 \( T \) is the
 determinant \( \deter{T-x I} \) where \( x \) is a variable.
-The \definend{characteristic equation}\index{characteristic equation}%
+The \definend{characteristic equation}\index{characteristic!equation}%
 \index{matrix!characteristic polynomial}
 is $\deter{T-xI}=0$.
 The \definend{characteristic polynomial of a transformation}
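A small worked case of the definition (my matrix; assumes amsmath):

```latex
% For T=\begin{pmatrix}2&1\\0&3\end{pmatrix} the characteristic polynomial is
\begin{equation*}
  \deter{T-xI}=\begin{vmatrix}2-x&1\\0&3-x\end{vmatrix}=(2-x)(3-x)
\end{equation*}
% so the characteristic equation (2-x)(3-x)=0 has roots 2 and 3,
% the eigenvalues of T.
```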
@@ -3512,7 +3512,7 @@ Apply \nearbylemma{lm:DiagIffBasisOfEigens}.
 \cite{MathMag67p232}
 Show that if \( A \) is an \( n \) square matrix and each row (column)
 sums to \( c \) then \( c \) is a characteristic root of \( A \).
-(``Characteristic root'' is a synonym for eigenvalue.)\index{characteristic root}\index{root!characteristic}
+(``Characteristic root'' is a synonym for eigenvalue.)\index{characteristic!root}\index{root!characteristic}
 \begin{answer}
 \answerasgiven %
 If the argument of the characteristic function of \( A \) is set equal to
...
@@ -688,7 +688,7 @@ A \definend{nilpotent matrix}\index{matrix!nilpotent}%
 \index{nilpotent!matrix}
 is one with a power that is the zero matrix.
 In either case, the least such power is the \definend{index of nilpotency}.%
-\index{nilpotency!index}\index{index!of nilpotency}
+\index{nilpotency!index}\index{index, of nilpotency}
 \end{definition}
 \begin{example}
@@ -1844,7 +1844,7 @@ such that \( n(\vec{\beta}_1)=\vec{\beta}_2 \).
 that is, prove that \( t \) restricted to the span has a range
 that is a subset of the span.
 We say that the span is a \definend{\( t \)-invariant}
-subspace.\index{invariant!subspace}
+subspace.\index{invariant subspace}
 \partsitem Prove that the restriction is nilpotent.
 \partsitem Prove that the $t$-string
 is linearly independent and so is a basis for its span.
...
@@ -423,7 +423,7 @@ The total on the right is the zero matrix.
 We refer to that result by saying that a
 matrix or map
-\definend{satisfies}\index{characteristic polynomial!satisfied by}
+\definend{satisfies}\index{characteristic!polynomial!satisfied by}
 its characteristic polynomial.
 \begin{lemma} \label{le:tSatisImpMinPolyDivides}
@@ -1992,7 +1992,7 @@ condition.
 \begin{definition} \label{def:invariant}
 Let \( \map{t}{V}{V} \) be a transformation.
 A subspace \( M \) is \definend{$t$ invariant}%
-\index{invariant subspace!definition}\index{subspace!invariant}
+\index{invariant subspace}\index{subspace!invariant}
 if whenever \( \vec{m}\in M \) then \( t(\vec{m})\in M \)
 (shorter: \( t(M)\subseteq M \)).
 \end{definition}
...
@@ -37,10 +37,10 @@ and scalar multiplication
 if \( \vec{v}\in V \) and \( r\in\Re \) then
 \( h(r\cdot\vec{v})=r\cdot h(\vec{v}) \)
 \end{center}
-is a \definend{homomorphism}\index{homomorphism}%
+is a \definend{homomorphism}\index{homomorphism}\index{linear map}%
 \index{function!structure preserving!\see{homomorphism}}%
 \index{vector space!homomorphism}\index{vector space!map}
-or \definend{linear map}\index{linear map!see{homomorphism}}.
+or \definend{linear map}\index{linear map|seealso{homomorphism}}.
 %</df:Homo>
 \end{definition}
@@ -282,7 +282,7 @@ let
 $B=\sequence{\vec{\beta}_1,\ldots,\vec{\beta}_n}$
 be a basis for~$V$.
 A function defined on that basis $\map{f}{B}{W}$
-is \definend{extended linearly}\index{extended linearly}\index{function!extended linearly}\index{linear extension of a function}
+is \definend{extended linearly}\index{extended, linearly}\index{function!extended linearly}\index{linear extension of a function}
 to a function $\map{\hat{f}}{V}{W}$ if
 for all $\vec{v}\in V$ such that
 $\vec{v}=c_1\vec{\beta}_1+\cdots+c_n\vec{\beta}_n$,
@@ -320,7 +320,7 @@ like this one, using matrices.
 \begin{definition} \label{df:LinearTransformation}
 %<*df:LinearTransformation>
 A linear map from a space into itself \( \map{t}{V}{V} \) is a
-\definend{linear transformation}\index{linear transformation!see{transformation}}.
+\definend{linear transformation}\index{linear transformation}\index{linear transformation|seealso{transformation}}.
 %</df:LinearTransformation>
 \end{definition}
...@@ -398,7 +398,7 @@ from \( V \) to \( W \). ...@@ -398,7 +398,7 @@ from \( V \) to \( W \).
%<*SpLinFcns> %<*SpLinFcns>
\noindent We denote the space of linear maps from $V$ to~$W$ by \noindent We denote the space of linear maps from $V$ to~$W$ by
\( \linmaps{V}{W} \).\index{linear maps!space of} \( \linmaps{V}{W} \).\index{linear maps, vector space of}
%</SpLinFcns> %</SpLinFcns>
\begin{proof} \begin{proof}
@@ -1712,7 +1712,7 @@ is a member of $S$.
 \begin{definition} \label{df:NullSpace}
 %<*df:NullSpace>
 The \definend{null space}\index{homomorphism!null space}\index{null space}
-or \definend{kernel}\index{kernel} of a linear map
+or \definend{kernel}\index{kernel, of linear map} of a linear map
 \( \map{h}{V}{W} \) is the inverse image of $\zero_W$.
 \begin{equation*}
 \nullspace{h}=h^{-1}(\zero_W)=\set{\vec{v}\in V\suchthat h(\vec{v})=\zero_W}
...
@@ -2862,7 +2862,7 @@ perform the combination operation \( -2\rho_2+\rho_3 \).
 \begin{definition} \label{df:ElementaryReductionMatrices}
 %<*df:ElementaryReductionMatrices>
 The \definend{elementary reduction matrices}%
-\index{matrix!elementary reduction}\index{elementary!matrix}
+\index{matrix!elementary reduction}\index{elementary reduction matrix}
 result from applying one Gaussian operation to an identity matrix.
 \begin{enumerate}
 \item \( I\grstep{k\rho_i}M_i(k) \) for \( k\neq 0 \)
@@ -2875,7 +2875,7 @@ result from applying one Gaussian operation to an identity matrix.
 \begin{lemma} \label{GrByMatMult}
 %<*lm:GrByMatMult>
-Matrix multiplication can do Gaussian reduction.
+Matrix multiplication can do Gaussian reduction.\index{elementary reduction operations!by matrix multiplication}\index{elementary row operations!by matrix multiplication}\index{Gauss's Method!by matrix multiplication}
 \begin{enumerate}
 \item If \( H\grstep{k\rho_i}G \) then \( M_i(k)H=G \).
 \item If \( H\grstep{\rho_i\leftrightarrow\rho_j}G \)
...
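The lemma's first item can be checked on a small instance. The following display is an illustrative computation (the matrices are invented, not taken from the book): $M_1(3)$ is the identity with its $1,1$ entry replaced by $3$, and left-multiplying by it triples the first row.

```latex
\begin{equation*}
  \begin{pmatrix} 3 &0 \\ 0 &1 \end{pmatrix}
  \begin{pmatrix} 1 &2 \\ 4 &5 \end{pmatrix}
  =
  \begin{pmatrix} 3 &6 \\ 4 &5 \end{pmatrix}
\end{equation*}
```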
@@ -148,7 +148,7 @@ whose entries are nonnegative reals and whose columns sum to $1$.
 A characteristic feature of
 a Markov chain model is that it is
-\definend{historyless}\index{historyless}%
+\definend{historyless}\index{historyless process}%
 \index{Markov chain!historyless} in that
 the next state depends only on the current state,
 not on any prior ones.
...
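The historyless property has a compact matrix statement: if $T$ is the transition matrix and $\vec{v}_n$ is the state distribution at step~$n$, then the next distribution depends on $\vec{v}_n$ alone. A sketch with an invented two-state chain (note the nonnegative entries and columns summing to $1$, as in the hunk context above):

```latex
\begin{equation*}
  \vec{v}_{n+1}=T\vec{v}_n
  \qquad\text{for instance}\qquad
  T=\begin{pmatrix}
      0.9 &0.2 \\
      0.1 &0.8
    \end{pmatrix}
\end{equation*}
```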
@@ -84,7 +84,7 @@ The solution
 changes radically depending on the ninth digit, which explains why
 an eight-place computer has trouble.
 A problem that is very sensitive to inaccuracy or uncertainties in
-the input values is \definend{ill-conditioned}.\index{ill-conditioned}
+the input values is \definend{ill-conditioned}.\index{ill-conditioned problem}
 The above example gives one way in which a system can be
 difficult to solve on a computer.
...
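For comparison with the eight-place example the hunk refers to, here is a standard illustration of ill-conditioning (not the book's system): perturbing one right-hand side in the fifth significant digit moves the solution from $(2,0)$ to $(1,1)$.

```latex
\begin{align*}
  x+y       &= 2  &  x+y       &= 2      \\
  x+1.0001y &= 2  &  x+1.0001y &= 2.0001
\end{align*}
```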
@@ -327,11 +327,11 @@ Thus we can think of projective space as consisting of the Euclidean plane
 with some extra points adjoined \Dash
 the Euclidean plane is embedded in the projective plane.
 The extra points in projective space, the equatorial points,
-are called \definend{ideal points}\index{ideal point}%
+are called \definend{ideal points}\index{ideal!point}%
 \index{projective plane!ideal point}
 or \definend{points at infinity}\index{point!at infinity}
 and the equator is called the
-\definend{ideal line}\index{ideal line}%
+\definend{ideal line}\index{ideal!line}%
 \index{projective plane!ideal line} or
 \definend{line at infinity}\index{line at infinity}
 (it is not a Euclidean line, it is a projective line).
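In homogeneous coordinates this embedding is often written as follows (a standard convention, not quoted from the source): a Euclidean point $(x,y)$ maps to the projective point with third coordinate~$1$, and the ideal points are exactly those with third coordinate~$0$.

```latex
\begin{equation*}
  (x,y)\;\longmapsto\;(x:y:1)
  \qquad\qquad
  \text{ideal points: }(x:y:0),\quad (x,y)\neq(0,0)
\end{equation*}
```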