Commit dcc4189f authored by Jim Hefferon

spell checks for the third chapter

parent 5526a5ff
@@ -11167,7 +11167,7 @@
\end{aligned}
\end{multline*}
(An alternate proof is to simply note that this is a
-property of differentiation that is familar from calculus.)
+property of differentiation that is familiar from calculus.)
These two maps are not inverses as this composition
does not act as the identity map on
@@ -11687,7 +11687,7 @@
\colvec{f_1(\vec{v}) \\ f_2(\vec{v})}
\end{equation*}
They are linear because they are the composition of linear functions,
-and the fact that the compoistion of linear functions is linear
+and the fact that the composition of linear functions is linear
was part of the proof that isomorphism is an equivalence
relation (alternatively, the check that they are linear is
straightforward).
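In brief, that composition argument runs this way: if $f$ and $g$ are linear then
\begin{equation*}
  g\circ f\,(c_1\vec{v}_1+c_2\vec{v}_2)
  =g\bigl(c_1\cdot f(\vec{v}_1)+c_2\cdot f(\vec{v}_2)\bigr)
  =c_1\cdot g\circ f\,(\vec{v}_1)+c_2\cdot g\circ f\,(\vec{v}_2)
\end{equation*}
so the composition is also linear.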
@@ -12757,7 +12757,7 @@
\colvec[r]{2 \\ 0}-\colvec[r]{-1 \\ 0}=\colvec[r]{3 \\ 0}
\end{equation*}
-A more systemmatic way to find the image of $\vec{e}_2$ is to
+A more systematic way to find the image of $\vec{e}_2$ is to
use the given information to represent the transformation, and then
use that representation to determine the image.
Taking this for a basis,
@@ -13344,7 +13344,7 @@
\end{equation*}
gives the additional information (beyond that there is at least one
solution) that there are infinitely many solutions.
-Parametizing gives $c_2=-1+c_3$ and $c_1=1$, and so taking $c_3$ to
+Parametrizing gives $c_2=-1+c_3$ and $c_1=1$, and so taking $c_3$ to
be zero gives a particular solution of $c_1=1$, $c_2=-1$, and
$c_3=0$ (which is, of course, the observation made at the start).
\end{exparts}
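Written out, the parametrization $c_1=1$, $c_2=-1+c_3$ gives this set of solution triples $(c_1,c_2,c_3)$.
\begin{equation*}
  \set{\colvec[r]{1 \\ -1 \\ 0}+c_3\cdot\colvec[r]{0 \\ 1 \\ 1}
    \suchthat c_3\in\Re}
\end{equation*}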
@@ -13368,7 +13368,7 @@
\colvec[r]{0 \\ 0 \\ 1}
\!\mapsto\colvec[r]{3 \\ 4}
\end{equation*}
-So, for this first one, we are asking whether thare are scalars such that
+So, for this first one, we are asking whether there are scalars such that
\begin{equation*}
c_1\colvec[r]{1 \\ 0}+c_2\colvec[r]{1 \\ 1}
+c_3\colvec[r]{3 \\ 4}=\colvec[r]{1 \\ 3}
@@ -13529,7 +13529,7 @@
\end{ans}
\begin{ans}{Three.III.2.16}
-Let the matrix be $G$, and suppose that it rperesents $\map{g}{V}{W}$
+Let the matrix be $G$, and suppose that it represents $\map{g}{V}{W}$
with respect to bases $B$ and $D$.
Because $G$ has two columns, $V$ is two-dimensional.
Because $G$ has two rows, $W$ is two-dimensional.
@@ -13579,7 +13579,7 @@
\end{ans}
\begin{ans}{Three.III.2.18}
-Recall that the represention map
+Recall that the representation map
\begin{equation*}
V\mapsunder{\text{Rep}_{B}}\Re^n
\end{equation*}
@@ -13710,7 +13710,7 @@
to its dot product with $\vec{x}$ is linear (this is a matrix-vector
product and so \nearbytheorem{th:MatIsLinMap} applies).
Thus the map under consideration $h_{\vec{x}}$ is linear because
-it is the composistion of two linear maps.
+it is the composition of two linear maps.
\begin{equation*}
\vec{v}\mapsto \rep{\vec{v}}{B}
\mapsto \vec{x}\cdot\rep{\vec{v}}{B}
@@ -13898,7 +13898,7 @@
h_{1,j}\vec{\delta}_1+\dots+h_{i,j}\vec{\delta}_i
+\dots+h_{m,j}\vec{\delta}_m
\end{equation*}
-and with respcet to $B,2\cdot D$ it also represents
+and with respect to $B,2\cdot D$ it also represents
\( \map{h_2}{V}{W} \) sending
\begin{equation*}
\vec{\beta}_j\mapsto
@@ -16421,7 +16421,7 @@
The proof tells us what how the bases change.
We start by swapping the first and second rows
of the representation with respect to $B$ to get a representation
-with resepect to a new basis $B_1$.
+with respect to a new basis $B_1$.
\begin{equation*}
\rep{1-x+3x^2-x^3}{B_1}=
\colvec[r]{1 \\ 0 \\ 1 \\ 2}_{B_1}
@@ -16631,7 +16631,7 @@
\colvec[r]{1 \\ 1}=1\cdot\colvec[r]{1 \\ 0}
+1\cdot\colvec[r]{0 \\ 1}
\end{equation*}
-give the other nonsinguar matrix.
+give the other nonsingular matrix.
\begin{equation*}
\rep{\identity}{\hat{B},B}=\begin{mat}[r]
0 &1 \\
@@ -17301,7 +17301,7 @@
Suppose that \( \vec{v}\in\Re^n \) with \( n>1 \).
If \( \vec{v}\neq\zero \) then we consider the line
\( \ell=\set{c\vec{v}\suchthat c\in\Re} \) and if \( \vec{v}=\zero \)
-we take \( \ell \) to be any (nondegenerate) line at all
+we take \( \ell \) to be any (non-degenerate) line at all
(actually, we needn't distinguish between these two cases\Dash see
the prior exercise).
Let \( v_1,\dots,v_n \) be the components of \( \vec{v} \);
@@ -17329,7 +17329,7 @@
The dimension \( n=0 \) case is the trivial vector space, here
there is only one vector and so it cannot be expressed as the projection
of a different vector.
-In the dimension $n=1$ case there is only one (nondegenerate) line,
+In the dimension $n=1$ case there is only one (non-degenerate) line,
and every vector is in it, hence every vector is the projection only
of itself.
@@ -17670,7 +17670,7 @@
\end{ans}
\begin{ans}{Three.VI.2.12}
-We can paramatrize the given space can in this way.
+We can parametrize the given space can in this way.
\begin{equation*}
\set{\colvec{x \\ y \\ z} \suchthat x=y-z}
=\set{\colvec[r]{1 \\ 1 \\ 0}\cdot y+\colvec[r]{-1 \\ 0 \\ 1}\cdot z
@@ -17865,7 +17865,7 @@
meets the vertical dashed line
$\vec{v}-(1\cdot\vec{e}_1+2\cdot\vec{e}_2)$; this is what
first item of this question proved.
-The Pythagorean theorem then gives that the hypoteneuse\Dash the
+The Pythagorean theorem then gives that the hypotenuse\Dash the
segment from $\vec{v}$ to any other vector\Dash is longer than
the vertical dashed line.
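In symbols: writing $\vec{p}$ for $1\cdot\vec{e}_1+2\cdot\vec{e}_2$ and $\vec{u}$ for any other vector in the plane (these names are assumed here for the sketch, not taken from the hunk), the right triangle gives
\begin{equation*}
  \|\vec{v}-\vec{u}\|^2
  =\|\vec{v}-\vec{p}\|^2+\|\vec{p}-\vec{u}\|^2
  \geq\|\vec{v}-\vec{p}\|^2
\end{equation*}
so the segment from $\vec{v}$ to $\vec{u}$ is at least as long as the vertical dashed one.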
@@ -19644,7 +19644,7 @@
perform the row operations and, if needed, column operations
to reduce it to a partial-identity matrix.
We will then translate that into a factorization $H=PBQ$.
-Subsitituting into the general matrix
+Substituting into the general matrix
\begin{equation*}
\rep{r_\theta}{\stdbasis_2,\stdbasis_2}
\begin{mat}
@@ -158,7 +158,7 @@ For contrast the next picture shows the effect of the map represented by
$C_{2,1}(1)$.
Here vectors are affected according to their
second component:
-$\binom{x}{y}$ slides horozontally by twice $y$.
+$\binom{x}{y}$ slides horizontally by twice $y$.
\begin{center}
\includegraphics{ch3.57}
\end{center}
@@ -174,7 +174,7 @@ $H=T_nT_{n-1}\cdots T_jBT_{j-1}\cdots T_1$,
and so, in some sense, we have an understanding
of the action of any matrix $H$.
-We will illustrate the usefullness of our understanding in two ways.
+We will illustrate the usefulness of our understanding in two ways.
The first is that we will use it to prove something about linear maps.
Recall that under a linear map, the image of a subspace is a subspace
and thus the linear transformation $h$ represented by $H$ maps lines
@@ -192,7 +192,7 @@ Therefore their composition also preserves lines.
% Thus, by understanding its components we can understand arbitrary square
% matrices $H$, in the sense that we can prove things about them.
-The second way that we will illustrate the usefullness of
+The second way that we will illustrate the usefulness of
our understanding is to apply it to Calculus.
Below is a picture
of the action of the one-variable real function \( y(x)=x^2+x \).
@@ -430,7 +430,7 @@ is appealing both for its simplicity and for its usefulness.
perform the row operations and, if needed, column operations
to reduce it to a partial-identity matrix.
We will then translate that into a factorization $H=PBQ$.
-Subsitituting into the general matrix
+Substituting into the general matrix
\begin{equation*}
\rep{r_\theta}{\stdbasis_2,\stdbasis_2}
\begin{mat}
@@ -616,7 +616,7 @@ record was 1954-May-06.
\textit{(This illustrates that there are data sets for which a
linear model is not right, and that the line of best fit doesn't
in that case have any predictive value.)}
-In a highway resturant a trucker told me that his boss often sends
+In a highway restaurant a trucker told me that his boss often sends
him by a roundabout route, using more gas
but paying lower bridge tolls.
He said that New York state sets the bridge
@@ -2178,7 +2178,7 @@ classes,
the reduced echelon form matrices.
In this section we have followed that outline,
-except that the appropriate notion of same-ness
+except that the appropriate notion of sameness
here is vector space isomorphism.
First we defined isomorphism, saw some examples,
and established some properties.
@@ -600,7 +600,7 @@ is more fruitful and more central to further progress.
\end{aligned}
\end{multline*}
(An alternate proof is to simply note that this is a
-property of differentiation that is familar from calculus.)
+property of differentiation that is familiar from calculus.)
These two maps are not inverses as this composition
does not act as the identity map on
@@ -1295,7 +1295,7 @@ is more fruitful and more central to further progress.
\colvec{f_1(\vec{v}) \\ f_2(\vec{v})}
\end{equation*}
They are linear because they are the composition of linear functions,
-and the fact that the compoistion of linear functions is linear
+and the fact that the composition of linear functions is linear
was part of the proof that isomorphism is an equivalence
relation (alternatively, the check that they are linear is
straightforward).
@@ -1486,7 +1486,7 @@ We lose that the domain
corresponds perfectly to the range.
What we retain, as the examples below illustrate,
is that a homomorphism describes how
-the domain is ``like'' or ``analgous to'' the range.
+the domain is ``like'' or ``analogous to'' the range.
\begin{example} \label{ex:RThreeHomoRTwo} %\label{exPicProj}
We think of $\Re^3$ as like $\Re^2$ except that vectors have an extra
@@ -1802,7 +1802,7 @@ Equality holds if and only if the nullity of the map is $0$.
We know
that an isomorphism exists between two spaces
if and only if the dimension of the range equals the dimension of the domain.
-We have now seen that for a homomorphism to exist a nexessary condition is that
+We have now seen that for a homomorphism to exist a necessary condition is that
the dimension of the range must be less than or equal to the
dimension of the domain.
For instance, there is no homomorphism
@@ -1214,7 +1214,7 @@ for any matrix there is an associated linear map.
\colvec[r]{2 \\ 0}-\colvec[r]{-1 \\ 0}=\colvec[r]{3 \\ 0}
\end{equation*}
-A more systemmatic way to find the image of $\vec{e}_2$ is to
+A more systematic way to find the image of $\vec{e}_2$ is to
use the given information to represent the transformation, and then
use that representation to determine the image.
Taking this for a basis,
@@ -1988,7 +1988,7 @@ but we do not have particular spaces or bases in mind then
we often take the
domain and codomain to be $\Re^n$ and $\Re^m$ and use the standard
bases.
-This is convienent because with the standard bases
+This is convenient because with the standard bases
vector representation is transparent\Dash
the representation of $\vec{v}$ is $\vec{v}$.
(In this case the
@@ -2081,7 +2081,7 @@ that superspace
(because any basis for the rangespace is a linearly independent subset
of the codomain
whose size is equal to the dimension of the codomain, and thus so this
-basis for the reagespace must also be
+basis for the rangespace must also be
a basis for the codomain).
For the other half,
@@ -2257,7 +2257,7 @@ And, we shall see how to find the matrix that represents a map's inverse.
\end{equation*}
gives the additional information (beyond that there is at least one
solution) that there are infinitely many solutions.
-Parametizing gives $c_2=-1+c_3$ and $c_1=1$, and so taking $c_3$ to
+Parametrizing gives $c_2=-1+c_3$ and $c_1=1$, and so taking $c_3$ to
be zero gives a particular solution of $c_1=1$, $c_2=-1$, and
$c_3=0$ (which is, of course, the observation made at the start).
\end{exparts}
@@ -2293,7 +2293,7 @@ And, we shall see how to find the matrix that represents a map's inverse.
\colvec[r]{0 \\ 0 \\ 1}
\!\mapsto\colvec[r]{3 \\ 4}
\end{equation*}
-So, for this first one, we are asking whether thare are scalars such that
+So, for this first one, we are asking whether there are scalars such that
\begin{equation*}
c_1\colvec[r]{1 \\ 0}+c_2\colvec[r]{1 \\ 1}
+c_3\colvec[r]{3 \\ 4}=\colvec[r]{1 \\ 3}
@@ -2509,7 +2509,7 @@ And, we shall see how to find the matrix that represents a map's inverse.
domain.
\end{exparts}
\begin{answer}
-Let the matrix be $G$, and suppose that it rperesents $\map{g}{V}{W}$
+Let the matrix be $G$, and suppose that it represents $\map{g}{V}{W}$
with respect to bases $B$ and $D$.
Because $G$ has two columns, $V$ is two-dimensional.
Because $G$ has two rows, $W$ is two-dimensional.
@@ -2574,7 +2574,7 @@ And, we shall see how to find the matrix that represents a map's inverse.
respect to \( D \).
Show that map is a linear transformation of \( \Re^n \).
\begin{answer}
-Recall that the represention map
+Recall that the representation map
\begin{equation*}
V\mapsunder{\text{Rep}_{B}}\Re^n
\end{equation*}
@@ -2787,7 +2787,7 @@ And, we shall see how to find the matrix that represents a map's inverse.
to its dot product with $\vec{x}$ is linear (this is a matrix-vector
product and so \nearbytheorem{th:MatIsLinMap} applies).
Thus the map under consideration $h_{\vec{x}}$ is linear because
-it is the composistion of two linear maps.
+it is the composition of two linear maps.
\begin{equation*}
\vec{v}\mapsto \rep{\vec{v}}{B}
\mapsto \vec{x}\cdot\rep{\vec{v}}{B}
@@ -400,7 +400,7 @@ no matter what domain and codomain bases we use.
h_{1,j}\vec{\delta}_1+\dots+h_{i,j}\vec{\delta}_i
+\dots+h_{m,j}\vec{\delta}_m
\end{equation*}
-and with respcet to $B,2\cdot D$ it also represents
+and with respect to $B,2\cdot D$ it also represents
\( \map{h_2}{V}{W} \) sending
\begin{equation*}
\vec{\beta}_j\mapsto
@@ -466,7 +466,7 @@ no matter what domain and codomain bases we use.
\index{transpose!interaction with sum and scalar multiplication}
of a matrix $M$ is another matrix, whose $i,j$ entry is the
$j,i$ entry of $M$.
-Verifiy these identities.
+Verify these identities.
\begin{exparts}
\partsitem \( \trans{(G+H)}=\trans{G}+\trans{H} \)
\partsitem \( \trans{(r\cdot H)}=r\cdot\trans{H} \)
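A sketch for the first identity, comparing entries (the second follows the same pattern; the entry-subscript notation here is assumed, not the book's):
\begin{equation*}
  \bigl(\trans{(G+H)}\bigr)_{i,j}=(G+H)_{j,i}=g_{j,i}+h_{j,i}
  =\bigl(\trans{G}\bigr)_{i,j}+\bigl(\trans{H}\bigr)_{i,j}
\end{equation*}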
@@ -2466,7 +2466,7 @@ is square and has with all entries zero except for ones in the main diagonal.
\end{definition}
\begin{example}
-Here is the \( \nbyn{2} \) identity matrix leaving its multiplicand unchaged
+Here is the \( \nbyn{2} \) identity matrix leaving its multiplicand unchanged
when it acts from the right.
\begin{equation*}
\begin{mat}[r]
@@ -2916,7 +2916,7 @@ Until now we have taken the point of view that our primary objects of study
are vector spaces and the maps between them, and
have adopted matrices only for computational convenience.
This subsection show that this isn't the whole story.
-Understanding matrices operations vy how the entries combine can
+Understanding matrices operations by how the entries combine can
be useful also.
In the rest of this book we shall continue to focus on maps as the primary
objects but we will be pragmatic\Dash if the matrix point of view gives some
@@ -3500,7 +3500,7 @@ clearer idea then we will go with it.
\end{answer}
\item
Combine the two generalizations of the identity matrix,
-the one allowing entires to be other than ones, and the one allowing the
+the one allowing entries to be other than ones, and the one allowing the
single one in each row and column to be off the diagonal.
What is the action of this type of matrix?
\begin{answer}
@@ -5120,7 +5120,7 @@ elementary real number system can be interesting and useful.
items.
\end{exparts}
When two things multiply to give zero despite that neither is zero, each is
-said to be a \definend{zero divisor}.\index{zero divison}
+said to be a \definend{zero divisor}.\index{zero division}
Prove that no zero divisor is invertible.
\begin{answer}
For the answer to the items making up the first half, see
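For the second half, a one-line sketch (the matrix names $T$, $S$, and the zero matrix $Z$ are assumed here): if $T$ is a zero divisor, say $TS=Z$ with $S\neq Z$, and $T$ were invertible, then
\begin{equation*}
  S=(T^{-1}T)S=T^{-1}(TS)=T^{-1}Z=Z
\end{equation*}
contradicting $S\neq Z$; the case $ST=Z$ is symmetric.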
@@ -90,7 +90,7 @@ map \( \map{\identity}{V}{V} \) with respect to those bases.
Left-multiplication by the change of basis matrix for \( B,D \)
converts a representation with respect to \( B \) to one with respect to
\( D \).
-Conversly, if left-multiplication by a matrix changes bases
+Conversely, if left-multiplication by a matrix changes bases
$M\cdot\rep{\vec{v}}{B}=\rep{\vec{v}}{D}$
then $M$ is a change of basis matrix.
\end{lemma}
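A small instance of the lemma (the bases here are assumed for illustration, not taken from the commit): in $\Re^2$ take $B=\stdbasis_2$ and $D=\sequence{\colvec{1 \\ 1},\colvec[r]{1 \\ -1}}$, so that
\begin{equation*}
  \rep{\identity}{B,D}=\begin{mat}
    1/2 &1/2  \\
    1/2 &-1/2
  \end{mat}
  \qquad
  \begin{mat}
    1/2 &1/2  \\
    1/2 &-1/2
  \end{mat}\colvec{x \\ y}
  =\colvec{(x+y)/2 \\ (x-y)/2}
  =\rep{\colvec{x \\ y}}{D}
\end{equation*}
since $((x+y)/2)\cdot\colvec{1 \\ 1}+((x-y)/2)\cdot\colvec[r]{1 \\ -1}=\colvec{x \\ y}$.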
@@ -178,7 +178,7 @@ to some ending basis.
Because the matrix is nonsingular it will Gauss-Jordan reduce to the
identity.
If the matrix is the identity~$I$ then the statement is obvious.
-Otherwise there are elementatry reduction matrices such that
+Otherwise there are elementary reduction matrices such that
$R_r\cdots R_1\cdot M=I$ with $r\geq 1$.
Elementary matrices are invertible and their inverses are also elementary
so multiplying both sides of that equation from the left
@@ -608,7 +608,7 @@ the same space, and where the map is the identity map.
\end{equation*}
\end{answer}
\item
-Conside the vector space of real-valued functions with basis
+Consider the vector space of real-valued functions with basis
\( \sequence{\sin(x),\cos(x)} \).
Show that \( \sequence{2\sin(x)+\cos(x),3\cos(x)} \)
is also a basis for this space.
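One way to verify (a sketch using the book's representation machinery): with respect to $B=\sequence{\sin(x),\cos(x)}$ the given functions have these representations,
\begin{equation*}
  \rep{2\sin(x)+\cos(x)}{B}=\colvec[r]{2 \\ 1}
  \qquad
  \rep{3\cos(x)}{B}=\colvec[r]{0 \\ 3}
\end{equation*}
and the two columns are linearly independent, so the two functions form a linearly independent set of size two in a two-dimensional space, hence a basis.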
@@ -789,7 +789,7 @@ the same space, and where the map is the identity map.
\begin{exparts}
\partsitem In \( \polyspace_3 \) with basis
\( B=\sequence{1+x,1-x,x^2+x^3,x^2-x^3} \) we have this
-represenatation.
+representation.
\begin{equation*}
\rep{1-x+3x^2-x^3}{B}=
\colvec[r]{0 \\ 1 \\ 1 \\ 2}_B
@@ -815,7 +815,7 @@ the same space, and where the map is the identity map.
The proof tells us what how the bases change.
We start by swapping the first and second rows
of the representation with respect to $B$ to get a representation
-with resepect to a new basis $B_1$.
+with respect to a new basis $B_1$.
\begin{equation*}
\rep{1-x+3x^2-x^3}{B_1}=
\colvec[r]{1 \\ 0 \\ 1 \\ 2}_{B_1}
@@ -1184,7 +1184,7 @@ has been \definend{diagonalized}\index{matrix!diagonalized}
when its representation is diagonal with respect to $B,B$, that is,
with respect to equal starting
and ending bases.
-In Chaper Five we shall see which maps and matrices are diagonalizable.
+In Chapter Five we shall see which maps and matrices are diagonalizable.
In the rest of this subsection we consider the easier case
where representations are with respect to $B,D$, which are
possibly different starting and ending bases.
@@ -1223,7 +1223,7 @@ the set of matrices into matrix equivalence classes.
\end{center}
We can get some insight into the classes by comparing matrix equivalence
with row equivalence
-(rememeber that matrices are row equivalent when they can be reduced to each
+(remember that matrices are row equivalent when they can be reduced to each
other by row operations).
In $\hat{H}=PHQ$, the matrices $P$ and $Q$ are nonsingular and
thus we can write each as a product of elementary reduction matrices
@@ -1609,7 +1609,7 @@ this is a good classification of linear maps.
\colvec[r]{1 \\ 1}=1\cdot\colvec[r]{1 \\ 0}
+1\cdot\colvec[r]{0 \\ 1}
\end{equation*}
-give the other nonsinguar matrix.
+give the other nonsingular matrix.
\begin{equation*}
\rep{\identity}{\hat{B},B}=\begin{mat}[r]
0 &1 \\
@@ -22,7 +22,7 @@ is the $\vec{p}$ in the plane with the property that
someone standing on $\vec{p}$ and looking directly up or down sees
$\vec{v}$.
In this section we will generalize this to other projections,
-both orthogonal and nonorthogonal.
+both orthogonal and non-orthogonal.
@@ -202,7 +202,7 @@ This subsection has developed a natural projection map, orthogonal projection
into a line.
As suggested by the examples, we use it often in applications.
The next subsection shows how the definition of orthogonal
-projection into a line gives us a way to calculate especially convienent bases
+projection into a line gives us a way to calculate especially convenient bases
for vector spaces, again something that we often see in applications.
The final subsection completely generalizes projection, orthogonal or not,
into any subspace at all.
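For reference, the definition that this passage builds on (sketched here with assumed notation) projects $\vec{v}$ into the line spanned by $\vec{s}$:
\begin{equation*}
  \mbox{proj}_{[\vec{s}\,]}(\vec{v})
  =\frac{\vec{v}\cdot\vec{s}}{\vec{s}\cdot\vec{s}}\cdot\vec{s}
\end{equation*}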
@@ -317,7 +317,7 @@ into any subspace at all.
\partsitem $\colvec[r]{1 \\ 2}$
\partsitem $\colvec[r]{0 \\ 4}$
\end{exparts*}
-Show that in general the projection tranformation is this.
+Show that in general the projection transformation is this.
\begin{equation*}
\colvec{x_1 \\ x_2}
\mapsto
@@ -467,7 +467,7 @@ into any subspace at all.
Suppose that \( \vec{v}\in\Re^n \) with \( n>1 \).
If \( \vec{v}\neq\zero \) then we consider the line
\( \ell=\set{c\vec{v}\suchthat c\in\Re} \) and if \( \vec{v}=\zero \)
-we take \( \ell \) to be any (nondegenerate) line at all
+we take \( \ell \) to be any (non-degenerate) line at all
(actually, we needn't distinguish between these two cases\Dash see
the prior exercise).
Let \( v_1,\dots,v_n \) be the components of \( \vec{v} \);
@@ -495,7 +495,7 @@ into any subspace at all.
The dimension \( n=0 \) case is the trivial vector space, here
there is only one vector and so it cannot be expressed as the projection
of a different vector.
-In the dimension $n=1$ case there is only one (nondegenerate) line,
+In the dimension $n=1$ case there is only one (non-degenerate) line,
and every vector is in it, hence every vector is the projection only
of itself.
\end{answer}
@@ -1244,7 +1244,7 @@ An example is in \nearbyexercise{exer:OrthoRepEasy}.
Find an orthonormal basis for this subspace of $\Re^3$:~the
plane $x-y+z=0$.
\begin{answer}
-We can paramatrize the given space can in this way.
+We can parametrize the given space can in this way.
\begin{equation*}
\set{\colvec{x \\ y \\ z} \suchthat x=y-z}
=\set{\colvec[r]{1 \\ 1 \\ 0}\cdot y+\colvec[r]{-1 \\ 0 \\ 1}\cdot z
@@ -1465,7 +1465,7 @@ An example is in \nearbyexercise{exer:OrthoRepEasy}.
meets the vertical dashed line
$\vec{v}-(1\cdot\vec{e}_1+2\cdot\vec{e}_2)$; this is what
first item of this question proved.
-The Pythagorean theorem then gives that the hypoteneuse\Dash the
+The Pythagorean theorem then gives that the hypotenuse\Dash the
segment from $\vec{v}$ to any other vector\Dash is longer than
the vertical dashed line.