Commit c2d1d434 authored by Jim Hefferon

full edits for map6

parent 890c0495
@@ -17249,8 +17249,8 @@
=\vec{v}
\end{equation*}
(\textit{Remark}.
If we assume that $\vec{v}\,$ is nonzero then the above is
simplified on taking $\vec{s}\,$ to be $\vec{v}$.)
If we assume that $\vec{v}\,$ is nonzero then we can simplify
the above by taking $\vec{s}\,$ to be $\vec{v}$.)
\partsitem Write $c_{\vec{p}}\vec{s}\,$ for the projection
$\proj{\vec{v}}{\spanof{\vec{s}\,}}$.
Note that, by the assumption that $\vec{v}$ is not in the line,
@@ -17270,7 +17270,7 @@
$(a_1+a_2)\cdot\vec{v}=a_2c_{\vec{p}}\cdot\vec{s}$.
Because $\vec{v}\,$ isn't in the line, the scalars
$a_1+a_2$ and $a_2 c_{\vec{p}}$ must both be zero:
if $a_1+a_2$ were nonzero then
$\vec{v}=(a_2c_{\vec{p}}/(a_1+a_2))\cdot\vec{s}\,$ would lie in the line,
and once $a_1+a_2=0$ the equation gives
$a_2c_{\vec{p}}\cdot\vec{s}=\zero$, so that
$a_2c_{\vec{p}}=0$ since $\vec{s}\neq\zero$.
The $c_{\vec{p}}=0$ case is handled above, so
We handled the $c_{\vec{p}}=0$ case above, so
the remaining case is that $a_2=0$, and
this gives that $a_1=0$ also.
Hence the set is linearly independent.
@@ -17357,8 +17357,8 @@
\frac{\vec{v}\dotprod\vec{s}}{\vec{s}\dotprod\vec{s}}\cdot\vec{s}
\end{equation*}
the distance squared from the point to the line is this
(a vector dotted with itself $\vec{w}\dotprod\vec{w}$
is written $\vec{w}^2$).
(we write a vector dotted with itself $\vec{w}\dotprod\vec{w}$
as $\vec{w}^2$).
\begin{align*}
\norm{\vec{v}-
\frac{\vec{v}\dotprod\vec{s}}{\vec{s}\dotprod\vec{s}}
@@ -17670,7 +17670,7 @@
\end{ans}
\begin{ans}{Three.VI.2.12}
The given space can be parametrized in this way.
We can parametrize the given space in this way.
\begin{equation*}
\set{\colvec{x \\ y \\ z} \suchthat x=y-z}
=\set{\colvec[r]{1 \\ 1 \\ 0}\cdot y+\colvec[r]{-1 \\ 0 \\ 1}\cdot z
@@ -17806,10 +17806,17 @@
\end{ans}
\begin{ans}{Three.VI.2.15}
If that set is not linearly independent, then we get a zero vector.
Otherwise (if our set is linearly independent but does not span the
space), we are doing Gram-Schmidt on a set that is a basis for a
subspace and so we get an orthogonal basis for a subspace.
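For instance (with two vectors chosen here only for illustration),
starting from the independent, non-spanning set
$\set{\colvec[r]{1 \\ 1 \\ 0},\colvec[r]{0 \\ 1 \\ 1}}\subset\Re^3$
the process gives $\vec{\kappa}_1=\colvec[r]{1 \\ 1 \\ 0}$ and
\begin{equation*}
\vec{\kappa}_2
=\colvec[r]{0 \\ 1 \\ 1}
-\frac{\colvec[r]{0 \\ 1 \\ 1}\dotprod\colvec[r]{1 \\ 1 \\ 0}}
{\colvec[r]{1 \\ 1 \\ 0}\dotprod\colvec[r]{1 \\ 1 \\ 0}}
\cdot\colvec[r]{1 \\ 1 \\ 0}
=\colvec{-1/2 \\ 1/2 \\ 1}
\end{equation*}
and $\vec{\kappa}_1\dotprod\vec{\kappa}_2=0$, so the result is an
orthogonal basis for the plane spanned by the two starting vectors,
a proper subspace of $\Re^3$.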
\end{ans}
\begin{ans}{Three.VI.2.16}
The process leaves the basis unchanged.
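(In a sketch, assuming as this answer suggests a starting basis
$\sequence{\vec{v}_1,\ldots,\vec{v}_n}$ that is already orthogonal:
each subtracted projection vanishes,
\begin{equation*}
\vec{\kappa}_i
=\vec{v}_i
-\frac{\vec{v}_i\dotprod\vec{\kappa}_1}{\vec{\kappa}_1\dotprod\vec{\kappa}_1}\cdot\vec{\kappa}_1
-\cdots
-\frac{\vec{v}_i\dotprod\vec{\kappa}_{i-1}}{\vec{\kappa}_{i-1}\dotprod\vec{\kappa}_{i-1}}\cdot\vec{\kappa}_{i-1}
=\vec{v}_i-\zero-\cdots-\zero
=\vec{v}_i
\end{equation*}
because by induction each $\vec{\kappa}_j$ equals $\vec{v}_j$, and the
dot products $\vec{v}_i\dotprod\vec{v}_j$ with $j<i$ are all zero.)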
\end{ans}
\begin{ans}{Three.VI.2.16}
\begin{ans}{Three.VI.2.17}
\begin{exparts}
\partsitem The argument is as in the $i=3$ case of the proof
of \nearbytheorem{th:GramSchmidt}.
@@ -17837,7 +17844,7 @@
\vec{\kappa}_i\dotprod\vec{\kappa}_i)\right)
\cdot\vec{\kappa}_i$
equals, after all of the cancellation is done, zero).
\partsitem The vector $\vec{v}$ is shown in black and the
\partsitem The vector $\vec{v}$ is in black and the
vector $\proj{\vec{v}\,}{\spanof{\vec{\kappa}_1}}
+\proj{\vec{v}\,}{\spanof{\vec{v}_2}}
=1\cdot\vec{e}_1+2\cdot\vec{e}_2$ is in gray.
@@ -17849,15 +17856,15 @@
+\proj{\vec{v}\,}{\spanof{\vec{v}_2}})$
lies on the dotted line connecting the black vector to the
gray one, that is, it is orthogonal to the $xy$-plane.
\partsitem This diagram is gotten by following the hint.
\partsitem We get this diagram by following the hint.
\begin{center} \small
\includegraphics{ch3.84}
\end{center}
The dashed triangle has a right angle where
the gray vector $1\cdot\vec{e}_1+2\cdot\vec{e}_2$
meets the vertical dashed line
$\vec{v}-(1\cdot\vec{e}_1+2\cdot\vec{e}_2)$; this is what was
proved in the first item of this question.
$\vec{v}-(1\cdot\vec{e}_1+2\cdot\vec{e}_2)$; this is what the
first item of this question proved.
The Pythagorean theorem then gives that the hypotenuse\Dash the
segment from $\vec{v}$ to any other vector\Dash is longer than
the vertical dashed line.
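In symbols, writing $\vec{p}$ for the gray vector
$1\cdot\vec{e}_1+2\cdot\vec{e}_2$: any other vector $\vec{w}$
in the $xy$-plane satisfies
\begin{equation*}
\norm{\vec{v}-\vec{w}\,}^2
=\norm{\vec{v}-\vec{p}\,}^2+\norm{\vec{p}-\vec{w}\,}^2
\geq\norm{\vec{v}-\vec{p}\,}^2
\end{equation*}
because $\vec{v}-\vec{p}$ is orthogonal to $\vec{p}-\vec{w}$,
which lies in the plane.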
@@ -17897,7 +17904,7 @@
\end{exparts}
\end{ans}
\begin{ans}{Three.VI.2.17}
\begin{ans}{Three.VI.2.18}
One way to proceed is to find a third vector so that the three together
make a basis for $\Re^3$, e.g.,
\begin{equation*}
@@ -17947,9 +17954,9 @@
including the two vectors given in the question.
\end{ans}
\begin{ans}{Three.VI.2.18}
\begin{ans}{Three.VI.2.19}
\begin{exparts}
\partsitem The representation can be done by eye.
\partsitem We can do the representation by eye.
\begin{equation*}
\colvec[r]{2 \\ 3}=3\cdot\colvec[r]{1 \\ 1}+(-1)\cdot\colvec[r]{1 \\ 0}
\qquad
@@ -17969,7 +17976,7 @@
\cdot\colvec[r]{1 \\ 0}
=\frac{2}{1}\cdot\colvec[r]{1 \\ 0}
\end{equation*}
\partsitem As above, the representation can be done by eye
\partsitem As above, we can do the representation by eye
\begin{equation*}
\colvec[r]{2 \\ 3}=(5/2)\cdot\colvec[r]{1 \\ 1}
+(-1/2)\cdot\colvec[r]{1 \\ -1}
@@ -18009,7 +18016,7 @@
\end{exparts}
\end{ans}
\begin{ans}{Three.VI.2.19}
\begin{ans}{Three.VI.2.20}
First, $\norm{\vec{v}\,}^2=4^2+3^2+2^2+1^2=30$.
\begin{exparts*}
\partsitem $c_1=4$
@@ -18085,16 +18092,16 @@
The result now follows on gathering like terms and on recognizing that
$\vec{\kappa}_1\dotprod\vec{\kappa}_1=1$ and
$\vec{\kappa}_2\dotprod\vec{\kappa}_2=1$ because these vectors are
given as members of an orthonormal set.
members of an orthonormal set.
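(For instance, in the two-vector case the gathering looks like this
\begin{equation*}
(c_1\vec{\kappa}_1+c_2\vec{\kappa}_2)
\dotprod(d_1\vec{\kappa}_1+d_2\vec{\kappa}_2)
=c_1d_1(\vec{\kappa}_1\dotprod\vec{\kappa}_1)
+(c_1d_2+c_2d_1)(\vec{\kappa}_1\dotprod\vec{\kappa}_2)
+c_2d_2(\vec{\kappa}_2\dotprod\vec{\kappa}_2)
=c_1d_1+c_2d_2
\end{equation*}
since the cross term vanishes by orthogonality.)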
\end{ans}
\begin{ans}{Three.VI.2.20}
\begin{ans}{Three.VI.2.21}
It is true, except for the zero vector.
Every vector in \( \Re^n \) except the zero vector is in a basis, and
that basis can be orthogonalized.
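For example (with a vector chosen here only for illustration),
$\vec{v}=\colvec[r]{1 \\ 1}$ is a member of the basis
$\sequence{\colvec[r]{1 \\ 1},\colvec[r]{0 \\ 1}}$ of $\Re^2$,
and Gram-Schmidt turns that basis into the orthogonal basis
$\sequence{\colvec[r]{1 \\ 1},\colvec{-1/2 \\ 1/2}}$
whose first vector is still $\vec{v}$.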
\end{ans}
\begin{ans}{Three.VI.2.21}
\begin{ans}{Three.VI.2.22}
The $\nbyn{3}$ case gives the idea.
The set
\begin{equation*}
@@ -18147,7 +18154,7 @@
\end{equation*}
\end{ans}
\begin{ans}{Three.VI.2.22}
\begin{ans}{Three.VI.2.23}
If the set is empty then the summation on the left side is the
linear combination of the empty set of vectors,
which by definition adds to the zero vector.
@@ -18155,7 +18162,7 @@
`if \ldots then \ldots' implication is vacuously true.
\end{ans}
\begin{ans}{Three.VI.2.23}
\begin{ans}{Three.VI.2.24}
\begin{exparts}
\partsitem Part of the induction argument proving
\nearbytheorem{th:GramSchmidt} checks that
@@ -18176,7 +18183,7 @@
\end{exparts}
\end{ans}
\begin{ans}{Three.VI.2.24}
\begin{ans}{Three.VI.2.25}
For the inductive step, we assume that for all $j$ in~$[1..i]$,
these three conditions are true of each $\vec{\kappa}_j$:
(i)~each $\vec{\kappa}_j$ is nonzero,
@@ -18342,7 +18349,7 @@
\begin{equation*}
M^\perp=\set{k\cdot\colvec[r]{1 \\ 1}\suchthat k\in\Re}
\end{equation*}
\partsitem As in the answer to the prior part, $M$ can be described as
\partsitem As in the answer to the prior part, we can describe $M$ as
a span
\begin{equation*}
M=\set{c\cdot\colvec[r]{3/2 \\ 1}\suchthat c\in\Re}
@@ -18737,8 +18744,8 @@
\begin{ans}{Three.VI.3.14}
No, a decomposition of vectors $\vec{v}=\vec{m}+\vec{n}$ into
$\vec{m}\in M$ and $\vec{n}\in N$ does not depend on the bases
chosen for the subspaces\Dash
this was shown in the Direct Sum subsection.
chosen for the subspaces, as we showed
in the Direct Sum subsection.
\end{ans}
\begin{ans}{Three.VI.3.15}
@@ -18778,7 +18785,7 @@
\end{ans}
\begin{ans}{Three.VI.3.18}
If $V=M\directsum N$ then every vector can be decomposed uniquely as
If $V=M\directsum N$ then every vector decomposes uniquely as
$\vec{v}=\vec{m}+\vec{n}$.
For all $\vec{v}$ the map $p$ gives $p(\vec{v})=\vec{m}$ if and only
if $\vec{v}-p(\vec{v})=\vec{n}$, as required.
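For instance (with subspaces chosen here only for illustration),
in $\Re^2$ take $M$ to be the $x$-axis and
$N=\spanof{\set{\colvec[r]{1 \\ 1}}}$.
Then
\begin{equation*}
\colvec[r]{3 \\ 2}=\colvec[r]{1 \\ 0}+\colvec[r]{2 \\ 2}
\end{equation*}
is the unique decomposition, so the projection into $M$ along~$N$
sends $\colvec[r]{3 \\ 2}$ to $\colvec[r]{1 \\ 0}$, and the
difference $\colvec[r]{2 \\ 2}$ is indeed a member of~$N$.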
@@ -18980,7 +18987,7 @@
\end{equation*}
we have $\nullspace{f}^\perp
=\spanof{\set{\vec{h}_1,\dots,\vec{h}_m}}$.
(In \cite{Strang93}, this space is described as the
(\cite{Strang93} describes this space as the
transpose of the row space of $H$.)
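(A small instance, with a map chosen here only for illustration:
where $f\colon\Re^3\to\Re$ is $f(\colvec{x \\ y \\ z})=x+y+z$,
the null space is the plane
$\set{\colvec{x \\ y \\ z}\suchthat x+y+z=0}$
and its orthogonal complement is the line
$\spanof{\set{\colvec[r]{1 \\ 1 \\ 1}}}$,
the span of the single row of the matrix representing~$f$.)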
\end{exparts}
@@ -19035,7 +19042,7 @@
rewritten as $t(\vec{v}-t(\vec{v}))=\zero$ suggests taking
$\vec{v}=t(\vec{v})+(\vec{v}-t(\vec{v}))$.
So we are finished on taking a basis
To finish we take a basis
$B=\sequence{\vec{\beta}_1,\ldots,\vec{\beta}_n}$ for $V$ where
$\sequence{\vec{\beta}_1,\ldots,\vec{\beta}_r}$ is a basis for
the rangespace $M$ and