diff --git a/bookans.tex b/bookans.tex index ddecded82ce6943059d5e2491d3423aa79ab8883..21086ccdff789bf549c4c17df2e19fa0c20450ce 100644 --- a/bookans.tex +++ b/bookans.tex @@ -25464,6 +25464,61 @@ octave:6> gplot z \end{ans} \begin{ans}{Four.II.1.12} + \begin{exparts} + \partsitem + Gauss's Method + \begin{equation*} + \grstep[\rho_1+\rho_3]{-3\rho_1+\rho_2} + \begin{mat} + 1 &0 &-1 \\ + 0 &1 &4 \\ + 0 &0 &2 + \end{mat} + \end{equation*} + gives the determinant as~$+2$. + The sign is positive so the transformation preserves orientation. + \partsitem + The size of the box is the value of this determinant. + \begin{equation*} + \begin{vmat} + 1 &2 &1 \\ + -1 &0 &1 \\ + 2 &-1 &0 + \end{vmat} + =+6 + \end{equation*} + The orientation is positive. + \partsitem + Since this transformation is represented by the given matrix with + respect + to the standard bases, and with respect to + the standard basis the vectors represent themselves, + to find the image of the vectors under the transformation just + multiply + them, from the left, by the matrix. + \begin{equation*} + \colvec{1 \\ -1 \\ 2}\mapsto\colvec{-1 \\ 4 \\ 5} + \qquad + \colvec{2 \\ 0 \\ -1}\mapsto\colvec{3 \\ 5 \\ -5} + \qquad + \colvec{1 \\ 1 \\ 0}\mapsto\colvec{1 \\ 4 \\ -1} + \end{equation*} + Then compute the size of the resulting box. + \begin{equation*} + \begin{vmat} + -1 &3 &1 \\ + 4 &5 &4 \\ + 5 &-5 &-1 + \end{vmat} + =+12 + \end{equation*} + The starting box is positively oriented, the transformation + preserves orientations (since the determinant of the matrix is + positive), and the ending box is also positively oriented. + \end{exparts} + +\end{ans} +\begin{ans}{Four.II.1.13} Express each transformation with respect to the standard bases and find the determinant. \begin{exparts*} @@ -25473,17 +25528,17 @@ octave:6> gplot z \end{exparts*} \end{ans} -\begin{ans}{Four.II.1.13} +\begin{ans}{Four.II.1.14} The starting area is $$6$$ and the matrix changes sizes by $$-14$$. 
Thus the area of the image is $$84$$. \end{ans} -\begin{ans}{Four.II.1.14} +\begin{ans}{Four.II.1.15} By a factor of $$21/2$$. \end{ans} -\begin{ans}{Four.II.1.15} +\begin{ans}{Four.II.1.16} For a box we take a sequence of vectors (as described in the remark, the order of the vectors matters), while for a span we take a set of vectors. @@ -25494,7 +25549,7 @@ octave:6> gplot z span the coefficients are free to range over all of $\Re$. \end{ans} -\begin{ans}{Four.II.1.16} +\begin{ans}{Four.II.1.17} We have drawn that picture to mislead. The picture on the left is not the box formed by two vectors. If we slide it to the origin then it becomes the box formed by @@ -25516,19 +25571,19 @@ octave:6> gplot z which has an area of $4$. \end{ans} -\begin{ans}{Four.II.1.17} +\begin{ans}{Four.II.1.18} Yes to both. For instance, the first is $$\deter{TS}=\deter{T}\cdot\deter{S}= \deter{S}\cdot\deter{T}=\deter{ST}$$. \end{ans} -\begin{ans}{Four.II.1.18} +\begin{ans}{Four.II.1.19} % due to math.stackexchange.com user dgrasines517 Because $\deter{AB}=\deter{A}\cdot\deter{B}=\deter{BA}$ and these two matrices have different determinants. \end{ans} -\begin{ans}{Four.II.1.19} +\begin{ans}{Four.II.1.20} \begin{exparts} \partsitem If it is defined then it is $$(3^2)\cdot (2)\cdot (2^{-2})\cdot (3)$$. @@ -25536,14 +25591,14 @@ octave:6> gplot z \end{exparts} \end{ans} -\begin{ans}{Four.II.1.20} +\begin{ans}{Four.II.1.21} $$\begin{vmat} \cos\theta &-\sin\theta \\ \sin\theta &\cos\theta \end{vmat}=1$$ \end{ans} -\begin{ans}{Four.II.1.21} +\begin{ans}{Four.II.1.22} No, for instance the determinant of \begin{equation*} T=\begin{mat}[r] @@ -25555,11 +25610,11 @@ octave:6> gplot z has length $$2$$. \end{ans} -\begin{ans}{Four.II.1.22} +\begin{ans}{Four.II.1.23} It is zero. \end{ans} -\begin{ans}{Four.II.1.23} +\begin{ans}{Four.II.1.24} Two of the three sides of the triangle are formed by these vectors. 
\begin{equation*} \colvec[r]{2 \\ 2 \\ 2}-\colvec[r]{1 \\ 2 \\ 1}=\colvec[r]{1 \\ 0 \\ 1} @@ -25610,7 +25665,7 @@ octave:6> gplot z \end{equation*} \end{ans} -\begin{ans}{Four.II.1.24} +\begin{ans}{Four.II.1.25} \begin{exparts} \partsitem Because the image of a linearly dependent set is linearly dependent, @@ -25661,7 +25716,7 @@ octave:6> gplot z \end{exparts} \end{ans} -\begin{ans}{Four.II.1.25} +\begin{ans}{Four.II.1.26} Any permutation matrix has the property that the transpose of the matrix is its inverse. @@ -25678,19 +25733,19 @@ octave:6> gplot z \end{equation*} \end{ans} -\begin{ans}{Four.II.1.26} +\begin{ans}{Four.II.1.27} Where the sides of the box are $$c$$ times longer, the box has $$c^3$$ times as many cubic units of volume. \end{ans} -\begin{ans}{Four.II.1.27} +\begin{ans}{Four.II.1.28} If $$H=P^{-1}GP$$ then $$\deter{H}=\deter{P^{-1}}\deter{G}\deter{P} =\deter{P^{-1}}\deter{P}\deter{G}=\deter{P^{-1}P}\deter{G} =\deter{G}$$. \end{ans} -\begin{ans}{Four.II.1.28} +\begin{ans}{Four.II.1.29} \begin{exparts} \partsitem The new basis is the old basis rotated by $$\pi/4$$. \partsitem @@ -25738,7 +25793,7 @@ octave:6> gplot z \end{exparts} \end{ans} -\begin{ans}{Four.II.1.29} +\begin{ans}{Four.II.1.30} We will compare $$\det(\vec{s}_1,\dots,\vec{s}_n)$$ with $$\det(t(\vec{s}_1),\dots,t(\vec{s}_n))$$ to show that the second differs from the first by a factor of $\deter{T}$. @@ -25832,7 +25887,7 @@ octave:6> gplot z \end{equation*} \end{ans} -\begin{ans}{Four.II.1.30} +\begin{ans}{Four.II.1.31} \begin{exparts} \partsitem An algebraic check is easy. \begin{equation*} @@ -27231,9 +27286,103 @@ octave:6> gplot z \end{ans} \begin{ans}{Five.II.1.7} - Gauss's Method shows that - the first matrix represents maps of rank two while the second - matrix represents maps of rank three. 
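An editorial aside, not part of the patch or the book source: the rank claim in the removed lines just above (which this patch relocates to answer Five.II.1.10) is easy to confirm mechanically. A minimal pure-Python sketch using exact rational arithmetic:

```python
from fractions import Fraction as F

def rank(M):
    # Gaussian elimination over the rationals; returns the number of pivots.
    A = [[F(x) for x in row] for row in M]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

M1 = [[1, 0, 4], [1, 1, 3], [2, 1, 7]]  # third row is the sum of the first two
M2 = [[1, 0, 1], [0, 1, 1], [3, 1, 2]]
assert rank(M1) == 2
assert rank(M2) == 3
```

Since similar matrices represent the same map, and rank is a property of the map, differing ranks show the two are not similar.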
+ \begin{exparts} + \partsitem + \begin{equation*} + \begin{CD} + \C^3_{\wrt{B}} @>t>T> \C^3_{\wrt{B}} \\ + @V{\scriptstyle\identity} VV @V{\scriptstyle\identity} VV \\ + \C^3_{\wrt{D}} @>t>\hat{T}> \C^3_{\wrt{D}} + \end{CD} + \end{equation*} + \partsitem + For each element of the starting basis~$B$ find the effect of + the transformation + \begin{equation*} + \colvec{1 \\ 2 \\ 3}\mapsunder{t}\colvec{-2 \\ 3 \\ 4} + \qquad + \colvec{0 \\ 1 \\ 0}\mapsunder{t}\colvec{0 \\ 0 \\ 2} + \qquad + \colvec{0 \\ 0 \\ 1}\mapsunder{t}\colvec{-1 \\ 1 \\ 0} + \end{equation*} + and represent those outputs with respect to the ending basis~$B$ + \begin{equation*} + \rep{\colvec{-2 \\ 3 \\ 4}}{B}=\colvec{-2 \\ 7 \\ 10} + \qquad + \rep{\colvec{0 \\ 0 \\ 2}}{B}=\colvec{0 \\ 0 \\ 2} + \qquad + \rep{\colvec{-1 \\ 1 \\ 0}}{B}=\colvec{-1 \\ 3 \\ 3} + \end{equation*} + to get the matrix. + \begin{equation*} + T=\rep{t}{B,B}= + \begin{mat} + -2 &0 &-1 \\ + 7 &0 &3 \\ + 10 &2 &3 + \end{mat} + \end{equation*} + \partsitem + Find the effect of the transformation on the elements of~$D$ + \begin{equation*} + \colvec{1 \\ 0 \\ 0}\mapsunder{t}\colvec{1 \\ 0 \\ 0} + \qquad + \colvec{1 \\ 1 \\ 0}\mapsunder{t}\colvec{1 \\ 0 \\ 2} + \qquad + \colvec{1 \\ 0 \\ 1}\mapsunder{t}\colvec{0 \\ 1 \\ 0} + \end{equation*} + and represent those with respect to the ending basis~$D$ + \begin{equation*} + \rep{\colvec{1 \\ 0 \\ 0}}{D}=\colvec{1 \\ 0 \\ 0} + \qquad + \rep{\colvec{1 \\ 0 \\ 2}}{D}=\colvec{-1 \\ 0 \\ 2} + \qquad + \rep{\colvec{0 \\ 1 \\ 0}}{D}=\colvec{-1 \\ 1 \\ 0} + \end{equation*} + to get the matrix. + \begin{equation*} + \hat{T}=\rep{t}{D,D}= + \begin{mat} + 1 &-1 &-1 \\ + 0 &0 &1 \\ + 0 &2 &0 + \end{mat} + \end{equation*} + \partsitem + To go down on the right we need + $\rep{\identity}{B,D}$ + so we first compute the effect of the identity map on each element + of~$B$, + which is no effect, and then represent the results with respect + to~$D$.
+ \begin{equation*} + \rep{\colvec{1 \\ 2 \\ 3}}{D}=\colvec{-4 \\ 2 \\ 3} + \qquad + \rep{\colvec{0 \\ 1 \\ 0}}{D}=\colvec{-1 \\ 1 \\ 0} + \qquad + \rep{\colvec{0 \\ 0 \\ 1}}{D}=\colvec{-1 \\ 0 \\ 1} + \end{equation*} + So this is~$P$. + \begin{equation*} + P= + \begin{mat} + -4 &-1 &-1 \\ + 2 &1 &0 \\ + 3 &0 &1 + \end{mat} + \end{equation*} + For the other matrix~$\rep{\identity}{D,B}$ we can either find + it directly, as we just have with~$P$, or we can do the + usual calculation of a matrix inverse. + \begin{equation*} + P^{-1}= + \begin{mat} + 1 &1 &1 \\ + -2 &-1 &-2 \\ + -3 &-3 &-2 + \end{mat} + \end{equation*} + \end{exparts} \end{ans} \begin{ans}{Five.II.1.8} @@ -27435,6 +27584,12 @@ octave:6> gplot z \end{ans} \begin{ans}{Five.II.1.10} + Gauss's Method shows that + the first matrix represents maps of rank two while the second + matrix represents maps of rank three. + +\end{ans} +\begin{ans}{Five.II.1.11} The only representation of a zero map is a zero matrix, no matter what the pair of bases $\rep{z}{B,D}=Z$, and so in particular for any single basis $B$ we have $\rep{z}{B,B}=Z$. @@ -27447,18 +27602,18 @@ octave:6> gplot z respect to some $B,D$.) \end{ans} -\begin{ans}{Five.II.1.11} +\begin{ans}{Five.II.1.12} No. If $$A=PBP^{-1}$$ then $$A^2=(PBP^{-1})(PBP^{-1})=PB^2P^{-1}$$. \end{ans} -\begin{ans}{Five.II.1.12} +\begin{ans}{Five.II.1.13} Matrix similarity is a special case of matrix equivalence (if matrices are similar then they are matrix equivalent) and matrix equivalence preserves nonsingularity. \end{ans} -\begin{ans}{Five.II.1.13} +\begin{ans}{Five.II.1.14} A matrix is similar to itself; take $$P$$ to be the identity matrix:~$P=IPI^{-1}=IPI$. @@ -27474,7 +27629,7 @@ octave:6> gplot z is similar to $$U$$. \end{ans} -\begin{ans}{Five.II.1.14} +\begin{ans}{Five.II.1.15} Let $f_x$ and $f_y$ be the reflection maps (sometimes called flip's). For any bases $$B$$ and $$D$$, the matrices $$\rep{f_x}{B,B}$$ and @@ -27545,7 +27700,7 @@ octave:6> gplot z similar. 
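An editorial aside, not part of the patch or the book source: the matrices computed in the new answer Five.II.1.7 above can be cross-checked against the arrow square, which asserts $\hat{T}P=PT$ (equivalently $\hat{T}=PTP^{-1}$). A minimal pure-Python sketch:

```python
# Check the change-of-basis computation in answer Five.II.1.7.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

T = [[-2, 0, -1], [7, 0, 3], [10, 2, 3]]      # Rep_{B,B}(t)
T_hat = [[1, -1, -1], [0, 0, 1], [0, 2, 0]]   # Rep_{D,D}(t)
P = [[-4, -1, -1], [2, 1, 0], [3, 0, 1]]      # Rep_{B,D}(identity)
P_inv = [[1, 1, 1], [-2, -1, -2], [-3, -3, -2]]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(P, P_inv) == I                  # P_inv really is the inverse of P
assert matmul(T_hat, P) == matmul(P, T)       # equivalent to T_hat = P T P^{-1}
```

Comparing $\hat{T}P$ with $PT$ avoids computing an inverse and still certifies the similarity.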
\end{ans} -\begin{ans}{Five.II.1.15} +\begin{ans}{Five.II.1.16} We must show that if two matrices are similar then they have the same determinant and the same rank. Both determinant and rank are properties of matrices that @@ -27574,7 +27729,7 @@ octave:6> gplot z The argument for rank is much the same. \end{ans} -\begin{ans}{Five.II.1.16} +\begin{ans}{Five.II.1.17} The matrix equivalence class containing all $$\nbyn{n}$$ rank zero matrices contains only a single matrix, the zero matrix. Therefore it has as a subset only one similarity class. @@ -27590,7 +27745,7 @@ octave:6> gplot z infinitely many similarity classes. \end{ans} -\begin{ans}{Five.II.1.17} +\begin{ans}{Five.II.1.18} Yes, these are similar \begin{equation*} \begin{mat}[r] @@ -27609,7 +27764,7 @@ octave:6> gplot z $D=\sequence{\vec{\beta}_2,\vec{\beta}_1}$. \end{ans} -\begin{ans}{Five.II.1.18} +\begin{ans}{Five.II.1.19} The $$k$$-th powers are similar because, where each matrix represents the map $t$, the $k$-th powers represent $$t^k$$, the composition of $k$-many $t$'s. @@ -27628,7 +27783,7 @@ octave:6> gplot z Other negative powers are now given by the first paragraph. \end{ans} -\begin{ans}{Five.II.1.19} +\begin{ans}{Five.II.1.20} In conceptual terms, both represent $$p(t)$$ for some transformation $$t$$. In computational terms, we have this. @@ -27641,7 +27796,7 @@ octave:6> gplot z \end{align*} \end{ans} -\begin{ans}{Five.II.1.20} +\begin{ans}{Five.II.1.21} There are two equivalence classes, (i)~the class of rank~zero matrices, of which there is one: $\mathscr{C}_1=\set{(0)}$, @@ -27666,7 +27821,7 @@ octave:6> gplot z $(k)$ for $k\neq0$. \end{ans} -\begin{ans}{Five.II.1.21} +\begin{ans}{Five.II.1.22} No. Here is an example that has two pairs, each of two similar matrices: \begin{equation*} @@ -27745,7 +27900,7 @@ octave:6> gplot z since the zero matrix is similar only to itself. 
\end{ans} -\begin{ans}{Five.II.1.22} +\begin{ans}{Five.II.1.23} If $$N=P(T-\lambda I)P^{-1}$$ then $$N=PTP^{-1}-P(\lambda I)P^{-1}$$. The diagonal matrix $$\lambda I$$ commutes with anything, so diff --git a/det2.tex b/det2.tex index b627fa6ce12bbe3a657cd8557d60703d227b5e4a..4cc1c1d539c32435916dd85034bc17c5d7735b28 100644 --- a/det2.tex +++ b/det2.tex @@ -414,7 +414,88 @@ $1=\deter{I}=\deter{TT^{-1}}=\deter{T}\cdot\deter{T^{-1}}$ \partsitem $1/9$ \end{exparts*} \end{answer} - \recommended \item + \recommended \item + Consider the linear transformation~$t$ of~$\Re^3$ + represented with respect to the + standard bases by this matrix. + \begin{equation*} + \begin{mat} + 1 &0 &-1 \\ + 3 &1 &1 \\ + -1 &0 &3 + \end{mat} + \end{equation*} + \begin{exparts} + \partsitem Compute the determinant of the matrix. + Does the transformation preserve orientation or reverse it? + \partsitem Find the size of the box defined by these vectors. + What is its orientation? + \begin{equation*} + \colvec{1 \\ -1 \\ 2} + \quad + \colvec{2 \\ 0 \\ -1} + \quad + \colvec{1 \\ 1 \\ 0} + \end{equation*} + \partsitem Find the images under $t$ of the vectors in the prior item and + find the size of the box that they define. + What is the orientation? + \end{exparts} + \begin{answer} + \begin{exparts} + \partsitem + Gauss's Method + \begin{equation*} + \grstep[\rho_1+\rho_3]{-3\rho_1+\rho_2} + \begin{mat} + 1 &0 &-1 \\ + 0 &1 &4 \\ + 0 &0 &2 + \end{mat} + \end{equation*} + gives the determinant as~$+2$. + The sign is positive so the transformation preserves orientation. + \partsitem + The size of the box is the value of this determinant. + \begin{equation*} + \begin{vmat} + 1 &2 &1 \\ + -1 &0 &1 \\ + 2 &-1 &0 + \end{vmat} + =+6 + \end{equation*} + The orientation is positive.
+ \partsitem + Since this transformation is represented by the given matrix with + respect + to the standard bases, and with respect to + the standard basis the vectors represent themselves, + to find the image of the vectors under the transformation just + multiply + them, from the left, by the matrix. + \begin{equation*} + \colvec{1 \\ -1 \\ 2}\mapsto\colvec{-1 \\ 4 \\ 5} + \qquad + \colvec{2 \\ 0 \\ -1}\mapsto\colvec{3 \\ 5 \\ -5} + \qquad + \colvec{1 \\ 1 \\ 0}\mapsto\colvec{1 \\ 4 \\ -1} + \end{equation*} + Then compute the size of the resulting box. + \begin{equation*} + \begin{vmat} + -1 &3 &1 \\ + 4 &5 &4 \\ + 5 &-5 &-1 + \end{vmat} + =+12 + \end{equation*} + The starting box is positively oriented, the transformation + preserves orientations (since the determinant of the matrix is + positive), and the ending box is also positively oriented. + \end{exparts} + \end{answer} + \item By what factor does each transformation change the size of boxes? \begin{exparts*} diff --git a/jc1.tex b/jc1.tex index 5d5942cbd0f74888af3d534bb3e07ea7ffe68ea7..8eb9bd43e06adf89f66bfe6e56911c0115abcbad 100644 --- a/jc1.tex +++ b/jc1.tex @@ -358,6 +358,9 @@ the \definend{standard basis}\index{standard basis}\index{basis!standard}% \index{standard basis!complex number scalars} for $$\C^n$$ as a vector space over $\C$ and again denote it $$\stdbasis_n$$. +Another example is that +$\polyspace_n$ will be the vector space of polynomials of degree~$n$ or +less with complex coefficients. diff --git a/jc2.tex b/jc2.tex index 8c89165240ef00a7282ec8a5887291a4ac93946c..9a1d993a46e84bbe746ee18867cc1bdd6681c079 100644 --- a/jc2.tex +++ b/jc2.tex @@ -41,65 +41,12 @@ In matrix terms, \subsection{Definition and Examples} -\begin{definition} \label{df:Similar} -%<*df:Similar> -The matrices $$T$$ and $\hat{T}$ are -\definend{similar}\index{matrix!similarity}% -\index{equivalence relation!matrix similarity}\index{similar matrices} -if there is a nonsingular $$P$$ such that -$- \hat{T}=PTP^{-1} -$.
-% -\end{definition} - -\noindent Since nonsingular matrices are square, -$T$ and $\hat{T}$ must -be square and of the same size. -\nearbyexercise{exer:SimIsEquivRel} checks that -similarity is an equivalence relation. - -\begin{example} -Calculation with these two -\begin{equation*} - P= - \begin{mat}[r] - 2 &1 \\ - 1 &1 - \end{mat} - \qquad - T= - \begin{mat}[r] - 2 &-3 \\ - 1 &-1 - \end{mat} -\end{equation*} -gives that $T$ is similar to this matrix. -\begin{equation*} - \hat{T}= - \begin{mat}[r] - 12 &-19 \\ - 7 &-11 - \end{mat} -\end{equation*} -\end{example} - -\begin{example} \label{ex:OnlyZeroSimToZero} -%<*ex:OnlyZeroSimToZero> -The only matrix similar to the zero matrix is itself:~$PZP^{-1}=PZ=Z$. -The identity matrix has the same property:~$PIP^{-1}=PP^{-1}=I$. -% -\end{example} - \begin{example} Consider the derivative transformation $\map{d/dx}{\polyspace_2}{\polyspace_2}$, -and two bases for that space. -\begin{equation*} - B=\sequence{x^2,x,1} - \qquad - D=\sequence{1,1+x,1+x^2} -\end{equation*} +and two bases for that space, +$B=\sequence{x^2,x,1}$ and +$D=\sequence{1,1+x,1+x^2}$. We will compute the four sides of the arrow square. \begin{equation*} \begin{CD} @@ -108,7 +55,7 @@ We will compute the four sides of the arrow square. {\polyspace_2\,}_{\wrt{D}} @>d/dx>\hat{T}> {\polyspace_2\,}_{\wrt{D}} \end{CD} \end{equation*} -First the top. +First, the top. The effect of the transformation on the starting basis~$B$ \begin{equation*} x^2\mapsunder{d/dx} 2x @@ -161,7 +108,7 @@ gives the matrix~$\hat{T}$. Third, computing the matrix for the right-hand side involves finding the effect of the identity map on the elements of~$B$. Of course, the identity map does not transform them at all so -to find $\rep{id}{B,D}$ we represent $B$'s elements with respect +to find the matrix we represent $B$'s elements with respect to~$D$.
\begin{equation*} \rep{x^2}{D}=\colvec{-1 \\ 0 \\ 1} @@ -180,9 +127,9 @@ So the matrix for going down the right side is the concatenation of those. \end{mat} \end{equation*} -With that, we can compute in two ways the matrix for going up on +With that, we have two options to compute the matrix for going up on the left side. -Fpr direct computation, represent elements of~$D$ with +The direct computation represents elements of~$D$ with respect to~$B$ \begin{equation*} \rep{1}{B}=\colvec{0 \\ 0 \\ 1} @@ -191,7 +138,7 @@ respect to~$B$ \quad \rep{1+x^2}{B}=\colvec{1 \\ 0 \\ 1} \end{equation*} -and concatenate to make the matrix. +and concatenates to make the matrix. \begin{equation*} \begin{mat} 0 &0 &1 \\ @@ -199,8 +146,8 @@ and concatenate to make the matrix. 1 &1 &1 \end{mat} \end{equation*} -The other way to compute the matrix for going up on the left -is to find it as the inverse of the matrix~$P$ for +The other option to compute the matrix for going up on the left +is to take the inverse of the matrix~$P$ for going down on the right. \begin{equation*} \begin{pmat}{ccc|ccc} @@ -222,10 +169,60 @@ going down on the right. 0 &0 &1 &1 &1 &1 \end{pmat} \end{equation*} -The definition expresses the relationship in the second -way, as $\hat{T}=PTP^{-1}$. \end{example} +\begin{definition} \label{df:Similar} +%<*df:Similar> +The matrices $$T$$ and $\hat{T}$ are +\definend{similar}\index{matrix!similarity}% +\index{equivalence relation!matrix similarity}\index{similar matrices} +if there is a nonsingular $$P$$ such that +$+ \hat{T}=PTP^{-1} +$. +% +\end{definition} + +\noindent Since nonsingular matrices are square, +$T$ and $\hat{T}$ must +be square and of the same size. +\nearbyexercise{exer:SimIsEquivRel} checks that +similarity is an equivalence relation. + +\begin{example} +The definition does not require that we consider a map.
+Calculation with these two +\begin{equation*} + P= + \begin{mat}[r] + 2 &1 \\ + 1 &1 + \end{mat} + \qquad + T= + \begin{mat}[r] + 2 &-3 \\ + 1 &-1 + \end{mat} +\end{equation*} +gives that $T$ is similar to this matrix. +\begin{equation*} + \hat{T}= + \begin{mat}[r] + 12 &-19 \\ + 7 &-11 + \end{mat} +\end{equation*} +\end{example} + +\begin{example} \label{ex:OnlyZeroSimToZero} +%<*ex:OnlyZeroSimToZero> +The only matrix similar to the zero matrix is itself:~$PZP^{-1}=PZ=Z$. +The identity matrix has the same property:~$PIP^{-1}=PP^{-1}=I$. +% +\end{example} + + Matrix similarity is a special case of matrix equivalence so if two matrices are similar then they are matrix equivalent. What about the converse:~if they are square, @@ -329,7 +326,7 @@ if and only if they have the same rank). \end{mat} \end{multline*} \end{answer} - \recommended \item + \item \nearbyexample{ex:OnlyZeroSimToZero} shows that the only matrix similar to a zero matrix is itself and that the only matrix similar to the identity @@ -375,25 +372,127 @@ if and only if they have the same rank). \end{equation*} \end{exparts} \end{answer} - \recommended \item - Show that these matrices are not similar. + \recommended \item Consider this transformation of~$\C^3$ \begin{equation*} - \begin{mat}[r] - 1 &0 &4 \\ - 1 &1 &3 \\ - 2 &1 &7 - \end{mat} - \qquad - \begin{mat}[r] - 1 &0 &1 \\ - 0 &1 &1 \\ - 3 &1 &2 - \end{mat} + t(\colvec{x \\ y \\ z})=\colvec{x-z \\ z \\ 2y} + \end{equation*} + and these bases. + \begin{equation*} + B=\sequence{\colvec{1 \\ 2 \\ 3}, + \colvec{0 \\ 1 \\ 0}, + \colvec{0 \\ 0 \\ 1}} + \qquad + D=\sequence{\colvec{1 \\ 0 \\ 0}, + \colvec{1 \\ 1 \\ 0}, + \colvec{1 \\ 0 \\ 1}} \end{equation*} + We will compute the parts of the arrow diagram to + represent the transformation using two similar matrices. + \begin{exparts} + \partsitem Draw the arrow diagram, specialized for this case. + \partsitem Compute $T=\rep{t}{B,B}$. + \partsitem Compute $\hat{T}=\rep{t}{D,D}$. 
+ \partsitem Compute the matrices for the other two sides of the arrow + square. + \end{exparts} \begin{answer} + \begin{exparts} + \partsitem + \begin{equation*} + \begin{CD} + \C^3_{\wrt{B}} @>t>T> \C^3_{\wrt{B}} \\ + @V{\scriptstyle\identity} VV @V{\scriptstyle\identity} VV \\ + \C^3_{\wrt{D}} @>t>\hat{T}> \C^3_{\wrt{D}} + \end{CD} + \end{equation*} + \partsitem + For each element of the starting basis~$B$ find the effect of + the transformation + \begin{equation*} + \colvec{1 \\ 2 \\ 3}\mapsunder{t}\colvec{-2 \\ 3 \\ 4} + \qquad + \colvec{0 \\ 1 \\ 0}\mapsunder{t}\colvec{0 \\ 0 \\ 2} + \qquad + \colvec{0 \\ 0 \\ 1}\mapsunder{t}\colvec{-1 \\ 1 \\ 0} + \end{equation*} + and represent those outputs with respect to the ending basis~$B$ + \begin{equation*} + \rep{\colvec{-2 \\ 3 \\ 4}}{B}=\colvec{-2 \\ 7 \\ 10} + \qquad + \rep{\colvec{0 \\ 0 \\ 2}}{B}=\colvec{0 \\ 0 \\ 2} + \qquad + \rep{\colvec{-1 \\ 1 \\ 0}}{B}=\colvec{-1 \\ 3 \\ 3} + \end{equation*} + to get the matrix. + \begin{equation*} + T=\rep{t}{B,B}= + \begin{mat} + -2 &0 &-1 \\ + 7 &0 &3 \\ + 10 &2 &3 + \end{mat} + \end{equation*} + \partsitem + Find the effect of the transformation on the elements of~$D$ + \begin{equation*} + \colvec{1 \\ 0 \\ 0}\mapsunder{t}\colvec{1 \\ 0 \\ 0} + \qquad + \colvec{1 \\ 1 \\ 0}\mapsunder{t}\colvec{1 \\ 0 \\ 2} + \qquad + \colvec{1 \\ 0 \\ 1}\mapsunder{t}\colvec{0 \\ 1 \\ 0} + \end{equation*} + and represent those with respect to the ending basis~$D$ + \begin{equation*} + \rep{\colvec{1 \\ 0 \\ 0}}{D}=\colvec{1 \\ 0 \\ 0} + \qquad + \rep{\colvec{1 \\ 0 \\ 2}}{D}=\colvec{-1 \\ 0 \\ 2} + \qquad + \rep{\colvec{0 \\ 1 \\ 0}}{D}=\colvec{-1 \\ 1 \\ 0} + \end{equation*} + to get the matrix.
+ \begin{equation*} + \hat{T}=\rep{t}{D,D}= + \begin{mat} + 1 &-1 &-1 \\ + 0 &0 &1 \\ + 0 &2 &0 + \end{mat} + \end{equation*} + \partsitem + To go down on the right we need + $\rep{\identity}{B,D}$ + so we first compute the effect of the identity map on each element + of~$B$, + which is no effect, and then represent the results with respect + to~$D$. + \begin{equation*} + \rep{\colvec{1 \\ 2 \\ 3}}{D}=\colvec{-4 \\ 2 \\ 3} + \qquad + \rep{\colvec{0 \\ 1 \\ 0}}{D}=\colvec{-1 \\ 1 \\ 0} + \qquad + \rep{\colvec{0 \\ 0 \\ 1}}{D}=\colvec{-1 \\ 0 \\ 1} + \end{equation*} + So this is~$P$. + \begin{equation*} + P= + \begin{mat} + -4 &-1 &-1 \\ + 2 &1 &0 \\ + 3 &0 &1 + \end{mat} + \end{equation*} + For the other matrix~$\rep{\identity}{D,B}$ we can either find + it directly, as we just have with~$P$, or we can do the + usual calculation of a matrix inverse. + \begin{equation*} + P^{-1}= + \begin{mat} + 1 &1 &1 \\ + -2 &-1 &-2 \\ + -3 &-3 &-2 + \end{mat} + \end{equation*} + \end{exparts} \end{answer} \item Consider the transformation $\map{t}{\polyspace_2}{\polyspace_2}$ @@ -474,15 +573,15 @@ if and only if they have the same rank). \end{exparts} \end{answer} \recommended \item - Exhibit an nontrivial similarity relationship in this way:~let + Exhibit a nontrivial similarity relationship by letting $$\map{t}{\C^2}{\C^2}$$ act in this way, \begin{equation*} \colvec[r]{1 \\ 2}\mapsto\colvec[r]{3 \\ 0} \qquad \colvec[r]{-1 \\ 1}\mapsto\colvec[r]{-1 \\ 2} \end{equation*} - and pick two bases, - and represent $$t$$ with respect to them + picking two bases~$B,D$, + and representing $$t$$ with respect to them $$\hat{T}=\rep{t}{B,B}$$ and $$T=\rep{t}{D,D}$$. Then compute the $$P$$ and $$P^{-1}$$ to change bases from $$B$$ to $$D$$ and @@ -614,6 +713,26 @@ if and only if they have the same rank). \end{mat} \end{equation*} \end{answer} + \recommended \item + Show that these matrices are not similar.
+ \begin{equation*} + \begin{mat}[r] + 1 &0 &4 \\ + 1 &1 &3 \\ + 2 &1 &7 + \end{mat} + \qquad + \begin{mat}[r] + 1 &0 &1 \\ + 0 &1 &1 \\ + 3 &1 &2 + \end{mat} + \end{equation*} + \begin{answer} + Gauss's Method shows that + the first matrix represents maps of rank two while the second + matrix represents maps of rank three. + \end{answer} \item Explain \nearbyexample{ex:OnlyZeroSimToZero} in terms of maps. \begin{answer} diff --git a/slides/five_ii.tex b/slides/five_ii.tex index fb5868e4724c04af9d58bb72018bf69f34bddc58..ec2f993c1c77d6a6b99dee35e95d247294d11596 100644 --- a/slides/five_ii.tex +++ b/slides/five_ii.tex @@ -103,11 +103,11 @@ So we have this matrix representation of the map. The matrix changing bases from $B$ to $D$ is $\rep{\identity}{B,D}$. We find these by eye \begin{equation*} - \rep{1}{D}=\colvec{1 \\ 0 \\ 0} + \rep{\identity(1)}{D}=\colvec{1 \\ 0 \\ 0} \quad - \rep{x}{D}=\colvec{-1 \\ 1 \\ 0} + \rep{\identity(x)}{D}=\colvec{-1 \\ 1 \\ 0} \quad - \rep{x^2}{D}=\colvec{0 \\ -1 \\ 1} + \rep{\identity(x^2)}{D}=\colvec{0 \\ -1 \\ 1} \end{equation*} to get this. \begin{equation*} @@ -144,7 +144,7 @@ To check that, and to underline what the arrow diagram says V_{\wrt{D}} @>t>\hat{T}> V_{\wrt{D}} \end{CD} \end{equation*} -we calculate $T$ directly. +we calculate $\hat{T}$ directly. The effect of the map on the basis elements is $d/dx(1)=0$, $d/dx(1+x)=1$, and $d/dx(1+x+x^2)=1+2x$. Representing of those with respect to $D$ @@ -155,7 +155,7 @@ Representing of those with respect to $D$ \quad \rep{1+2x}{D}=\colvec{-1 \\ 2 \\ 0} \end{equation*} -gives the same matrix $\hat{T}=\rep{d/dx}{D,D}$ as we found above. +gives the same matrix $\hat{T}=\rep{d/dx}{D,D}$ as above. \end{frame} \begin{frame} The definition doesn't require that we consider the underlying maps. @@ -455,115 +455,6 @@ Not every vector is simply rescaled. -\begin{frame} -Matrices that are similar have the same eigenvalues, but -needn't have the same eigenvectors. 
- -\ex -These two are similar -\begin{equation*} - T= - \begin{mat} - 4 &0 &0 \\ - 0 &8 &0 \\ - 0 &0 &12 - \end{mat} - \qquad - S= - \begin{mat}[r] - 6 &-1 &-1 \\ - 2 &11 &-1 \\ - -6 &-5 &7 - \end{mat} -\end{equation*} -since $S=PTP^{-1}$ for this $P$. -\begin{equation*} - P= - \begin{mat}[r] - 1 &-1 &0 \\ - 0 &1 &-1 \\ - 2 &1 &1 - \end{mat} - \qquad - P^{-1}= - \begin{mat}[r] - 1/2 &1/4 &1/4 \\ - -1/2 &1/4 &1/4 \\ - -1/2 &-3/4 &1/4 - \end{mat} -\end{equation*} -\end{frame} -\begin{frame} -\noindent Suppose that $\map{t}{\C^3}{\C^3}$ is -represented by $T$ with respect to the standard basis. -Then this is the action of $t$. -\begin{equation*} - \colvec{x \\ y \\ z}\mapsunder{t}\colvec{4x \\ 8y \\ 12z} -\end{equation*} -\pause -By eye we see that three -eigenvalues of~$t$ are $\lambda_1=4$, $\lambda_2=8$, and~$\lambda_3=12$. -For instance this holds. -\begin{equation*} - T\cdot\colvec{1 \\ 0 \\ 0} - =\begin{mat} - 4 &0 &0 \\ - 0 &8 &0 \\ - 0 &0 &12 - \end{mat}\colvec{1 \\ 0 \\ 0} - =4\cdot\colvec{1 \\ 0 \\ 0} -\end{equation*} -\end{frame} -\begin{frame} -Contrast that with $S=PTP^{-1}$, which represents the same function, but -with respect to a different basis. -\begin{equation*} - \begin{CD} - V_{\wrt{\stdbasis_3}} @>t>T> V_{\wrt{\stdbasis_3}} \\ - @V{\scriptstyle\identity} VV @V{\scriptstyle\identity} VV \\ - V_{\wrt{B}} @>t>S> V_{\wrt{B}} - \end{CD} -\end{equation*} -We can easily find the basis~$B$. -Since $P^{-1}=\rep{\identity}{B,\stdbasis_3}$, its first column is -$\rep{\identity(\vec{\beta}_1)}{\stdbasis_3}=\rep{\vec{\beta}_1}{\stdbasis_3}$. -With respect to the standard basis any vector is represented by itself -so the first basis element $\vec{\beta}_1$ is the first column of $P^{-1}$. -The same goes for the other two columns. 
-\begin{equation*} - B=\sequence{\colvec[r]{1/2 \\ -1/2 \\ -1/2}, - \colvec[r]{1/4 \\ 1/4 \\ -3/4}, - \colvec[r]{1/4 \\ 1/4 \\ 1/4}} -\end{equation*} -\end{frame} -\begin{frame} -% We know that the transformation~$t$ has eigenvalues of $4$, $8$, and~$12$. -% For instance $t(\vec{e}_1)=4\vec{e}_1$. -Now, since each represents the transformation~$t$, the matrices~$T$ and $S$ -reflect the same action $\vec{e}_1\mapsto4\vec{e}_1$. -\begin{align*} - &\rep{t}{\stdbasis_3,\stdbasis_3}\cdot\rep{\vec{e}_1}{\stdbasis_3} - =T\cdot\rep{\vec{e}_1}{\stdbasis_3} - =4\cdot\rep{\vec{e}_1}{\stdbasis_3} \\ - &\rep{t}{B,B}\cdot\rep{\vec{e}_1}{B} - =S\cdot\rep{\vec{e}_1}{B} - =4\cdot\rep{\vec{e}_1}{B} -\end{align*} -But, while in the two equations the $4$'s are the same, the vectors -representations are not. -\begin{align*} - T\cdot\rep{\vec{e}_1}{\stdbasis_3} - =T\colvec{1 \\ 0 \\ 0} - &=4\cdot\colvec{1 \\ 0 \\ 0} \\ - S\cdot\rep{\vec{e}_1}{B} - =S\cdot\colvec{1 \\ 0 \\ 2} - &=4\cdot\colvec{1 \\ 0 \\ 2} -\end{align*} -So the two matrices have the same eigenvalues but different eigenvectors. -\end{frame} - - - \begin{frame}{Computing eigenvalues and eigenvectors} \ex @@ -791,6 +682,121 @@ These are for $\lambda_2=2$. \end{frame} +\begin{frame} +Matrices that are similar have the same eigenvalues, but +needn't have the same eigenvectors. + +\ex +These two are similar +\begin{equation*} + T= + \begin{mat} + 4 &0 &0 \\ + 0 &8 &0 \\ + 0 &0 &12 + \end{mat} + \qquad + S= + \begin{mat}[r] + 6 &-1 &-1 \\ + 2 &11 &-1 \\ + -6 &-5 &7 + \end{mat} +\end{equation*} +since $S=PTP^{-1}$ for this $P$. +\begin{equation*} + P= + \begin{mat}[r] + 1 &-1 &0 \\ + 0 &1 &-1 \\ + 2 &1 &1 + \end{mat} + \qquad + P^{-1}= + \begin{mat}[r] + 1/2 &1/4 &1/4 \\ + -1/2 &1/4 &1/4 \\ + -1/2 &-3/4 &1/4 + \end{mat} +\end{equation*} +For the first matrix +\begin{equation*} + \colvec{1 \\ 0 \\ 0} +\end{equation*} +is an eigenvector associated with the eigenvalue~$4$ but +that does not hold for the second matrix. 
+\end{frame} +% \begin{frame} +% \noindent Suppose that $\map{t}{\C^3}{\C^3}$ is +% represented by $T$ with respect to the standard basis. +% Then this is the action of $t$. +% \begin{equation*} +% \colvec{x \\ y \\ z}\mapsunder{t}\colvec{4x \\ 8y \\ 12z} +% \end{equation*} +% \pause +% By eye we see that three +% eigenvalues of~$t$ are $\lambda_1=4$, $\lambda_2=8$, and~$\lambda_3=12$. +% For instance this holds. +% \begin{equation*} +% T\cdot\colvec{1 \\ 0 \\ 0} +% =\begin{mat} +% 4 &0 &0 \\ +% 0 &8 &0 \\ +% 0 &0 &12 +% \end{mat}\colvec{1 \\ 0 \\ 0} +% =4\cdot\colvec{1 \\ 0 \\ 0} +% \end{equation*} +% \end{frame} +% \begin{frame} +% Contrast that with $S=PTP^{-1}$, which represents the same function, but +% with respect to a different basis. +% \begin{equation*} +% \begin{CD} +% V_{\wrt{\stdbasis_3}} @>t>T> V_{\wrt{\stdbasis_3}} \\ +% @V{\scriptstyle\identity} VV @V{\scriptstyle\identity} VV \\ +% V_{\wrt{B}} @>t>S> V_{\wrt{B}} +% \end{CD} +% \end{equation*} +% We can easily find the basis~$B$. +% Since $P^{-1}=\rep{\identity}{B,\stdbasis_3}$, its first column is +% $\rep{\identity(\vec{\beta}_1)}{\stdbasis_3}=\rep{\vec{\beta}_1}{\stdbasis_3}$. +% With respect to the standard basis any vector is represented by itself +% so the first basis element $\vec{\beta}_1$ is the first column of $P^{-1}$. +% The same goes for the other two columns. +% \begin{equation*} +% B=\sequence{\colvec[r]{1/2 \\ -1/2 \\ -1/2}, +% \colvec[r]{1/4 \\ 1/4 \\ -3/4}, +% \colvec[r]{1/4 \\ 1/4 \\ 1/4}} +% \end{equation*} +% \end{frame} +% \begin{frame} +% % We know that the transformation~$t$ has eigenvalues of $4$, $8$, and~$12$. +% % For instance $t(\vec{e}_1)=4\vec{e}_1$. +% Now, since each represents the transformation~$t$, the matrices~$T$ and $S$ +% reflect the same action $\vec{e}_1\mapsto4\vec{e}_1$. 
+% \begin{align*}
+%   &\rep{t}{\stdbasis_3,\stdbasis_3}\cdot\rep{\vec{e}_1}{\stdbasis_3}
+%   =T\cdot\rep{\vec{e}_1}{\stdbasis_3}
+%   =4\cdot\rep{\vec{e}_1}{\stdbasis_3} \\
+%   &\rep{t}{B,B}\cdot\rep{\vec{e}_1}{B}
+%   =S\cdot\rep{\vec{e}_1}{B}
+%   =4\cdot\rep{\vec{e}_1}{B}
+% \end{align*}
+% But, while in the two equations the $4$'s are the same, the vectors'
+% representations are not.
+% \begin{align*}
+%   T\cdot\rep{\vec{e}_1}{\stdbasis_3}
+%   =T\colvec{1 \\ 0 \\ 0}
+%   &=4\cdot\colvec{1 \\ 0 \\ 0} \\
+%   S\cdot\rep{\vec{e}_1}{B}
+%   =S\cdot\colvec{1 \\ 0 \\ 2}
+%   &=4\cdot\colvec{1 \\ 0 \\ 2}
+% \end{align*}
+% So the two matrices have the same eigenvalues but different eigenvectors.
+% \end{frame}
+
+
+
 \begin{frame}{Characteristic polynomial}
diff --git a/slides/four_i.tex b/slides/four_i.tex
index 0ebd229cb520575d43913fdc7ce863634dfb4449..1a4f5cd9bbcfe732cea29bf75317da9754f7ea8a 100644
--- a/slides/four_i.tex
+++ b/slides/four_i.tex
@@ -284,17 +284,19 @@ Thus here is the contrast.
 
 \begin{frame}{The determinant is unique}
-Recall the process by which we are developing the determinant.
-We gave four conditions that any determinant function must
-satisfy.
-From that definition it is not evident that a function satisfying those
-conditions exists.
-If such a function exists, from the definition it also
-is not immediately evident that
-the function is unique; perhaps there are $f_1$ and $f_2$ that give different
-outputs for some inputs.
-We now settle the second issue.
+Recall our definition, that a function is a determinant if
+it satisfies four conditions.
+This approach does not make evident that
+such a function is unique.
+(An analogy: imagine defining a function
+$\map{f}{\N}{\N}$ to be an `even-maker' under the condition that its
+output is an even constant.
+There is such a function, but there is more than one.)
+
+We settle the uniqueness question now; later we will show that such
+a function exists at all.
+\pause
 
 \lm[lm:DetFcnIsUnique]
 \ExecuteMetaData[../det1.tex]{lm:DetFcnIsUnique}
 
 \pf
 \ExecuteMetaData[../det1.tex]{pf:DetFcnIsUnique}
 \qed
 
-\medskip
-So if there is a function mapping $\matspace_{\nbyn{n}}$ to $\Re$ that
-satisfies the four conditions of the definition then there is only one such
-function.
+% \medskip
+% So if there is a function mapping $\matspace_{\nbyn{n}}$ to $\Re$ that
+% satisfies the four conditions of the definition then there is only one such
+% function.
 \end{frame}
 \begin{frame}{More process discussion}
 We are left with the possibility that such a function does not exist.
@@ -331,14 +333,14 @@ such a thing,
 \pause
 
 The rest of this section gives an alternative way to compute
-the value of a determinant, a formula.
-Because it does not involve Gauss's Method, this formula
+the determinant, a formula.
+This formula does not involve Gauss's Method and
 makes plain that the determinant is a function, that it
 returns well-defined outputs.
-As mentioned earlier, using this formula
+As mentioned earlier, computing a determinant with this formula
 is less practical than using the algorithm of Gauss's Method since it is slow.
-But it is very valuable for theory.
+But it nonetheless is invaluable for the theory.
 \end{frame}
 
 
@@ -445,6 +447,11 @@ determinants also break along a plus sign one row at a time.
 \begin{frame}
 \ExecuteMetaData[../det1.tex]{pf:DetsMultilinear3}
 \qed
+
+\medskip
+\noindent (\textit{Remark}.
+Some authors use multilinearity to define the determinant in place of our
+four conditions that lead to Gauss's Method.)
 \end{frame}
 
 
@@ -692,12 +699,6 @@ There are $3\cdot 2\cdot 1=6$ of these.
 \noindent After bringing out each entry from the original matrix, we are left
 with matrices that are all $0$'s except for a single~$1$ in each
 row and column.
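+
+\noindent For instance, here is one of the six such $\nbyn{3}$ matrices,
+with a single~$1$ in each row and column.
+\begin{equation*}
+  \begin{mat}
+    0 &1 &0 \\
+    1 &0 &0 \\
+    0 &0 &1
+  \end{mat}
+\end{equation*}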
-
-So, the only one thing remains
-to be done in our process of justifying the definition
-of determinant by finding a way to express
-determinants without using Gauss's Method:~give a formula for
-the determinant of such matrices (not involving Gauss's Method).
 \end{frame}
 
 
@@ -825,8 +826,14 @@ Renaming the matrix entries gives the familiar $\nbyn{2}$ formula.
 
 \begin{frame}
-The next subsection is optional,
-so we give the statements of its results here.
+The only thing remaining in our process of finding a formula for
+the determinant (one not involving Gauss's Method) is to give
+the determinant of such matrices.
+We do that in the next subsection.
+
+\pause
+That subsection is optional,
+so we state its results here.
 
 \th[th:DetsExist]
 \ExecuteMetaData[../det1.tex]{th:DetsExist}
@@ -1085,19 +1092,16 @@ So $\sgn(\phi)=+1$.
 
 
 %..........
-\begin{frame}{Determinants exist}
-Recall the process by which we are validating the determinant definition.
-That definition is given as four conditions and it is not
-clear that for each input matrix there is one and only one
-associated output, that the determinant
-gives a well-defined value.
-
-Performing Gauss's Method on the input matrix shows that for each
-input there is at least one possible output.
-But Gauss's Method can be done in more than one way so
-to show there is exactly one we want a formula
-that gives an
-obviously well-defined value.
+\begin{frame}{Process finished}
+We are in the process of showing that
+a function exists that satisfies the four conditions in the definition
+of determinant.
+We must show that for each input square matrix there is a well-defined
+output value~\Dash Gauss's Method can be done in more than one way, so
+it isn't obvious that by keeping track of signs and multiplying down the
+diagonal we always get the same output.
+Consequently we have turned to finding an alternative formula,
+one that obviously gives only one output.
 
 \pause
 \ExecuteMetaData[../det1.tex]{DefiningDFunction}