Commit 9593797d authored by Jim Hefferon

computational exercises, including one of similarity

parent 88d60eaa
@@ -414,7 +414,88 @@ $1=\deter{I}=\deter{TT^{-1}}=\deter{T}\cdot\deter{T^{-1}}$
\partsitem $1/9$
\end{exparts*}
\end{answer}
\recommended \item
Consider the linear transformation~$t$ of~$\Re^3$
represented with respect to the
standard bases by this matrix.
\begin{equation*}
\begin{mat}
1 &0 &-1 \\
3 &1 &1 \\
-1 &0 &3
\end{mat}
\end{equation*}
\begin{exparts}
\partsitem Compute the determinant of the matrix.
Does the transformation preserve orientation or reverse it?
\partsitem Find the size of the box defined by these vectors.
What is its orientation?
\begin{equation*}
\colvec{1 \\ -1 \\ 2}
\quad
\colvec{2 \\ 0 \\ -1}
\quad
\colvec{1 \\ 1 \\ 0}
\end{equation*}
\partsitem Find the images under $t$ of the vectors in the prior item and
find the size of the box that they define.
What is the orientation?
\end{exparts}
\begin{answer}
\begin{exparts}
\partsitem
Gauss's Method
\begin{equation*}
\grstep[\rho_1+\rho_3]{-3\rho_1+\rho_2}
\begin{mat}
1 &0 &-1 \\
0 &1 &4 \\
0 &0 &2
\end{mat}
\end{equation*}
gives the determinant as~$+2$.
The sign is positive so the transformation preserves orientation.
\partsitem
The size of the box is the value of this determinant.
\begin{equation*}
\begin{vmat}
1 &2 &1 \\
-1 &0 &1 \\
2 &-1 &0
\end{vmat}
=+6
\end{equation*}
The orientation is positive.
\partsitem
Since this transformation is represented by the given matrix with
respect
to the standard bases, and with respect to
the standard basis the vectors represent themselves,
to find the image of the vectors under the transformation just
multiply
them, from the left, by the matrix.
\begin{equation*}
\colvec{1 \\ -1 \\ 2}\mapsto\colvec{-1 \\ 4 \\ 5}
\qquad
\colvec{2 \\ 0 \\ -1}\mapsto\colvec{3 \\ 5 \\ -5}
\qquad
\colvec{1 \\ 1 \\ 0}\mapsto\colvec{1 \\ 4 \\ -1}
\end{equation*}
Then compute the size of the resulting box.
\begin{equation*}
\begin{vmat}
-1 &3 &1 \\
4 &5 &4 \\
5 &-5 &-1
\end{vmat}
=+12
\end{equation*}
The starting box is positively oriented, the transformation
preserves orientations (since the determinant of the matrix is
positive), and the ending box is also positively oriented.
\end{exparts}
\end{answer}
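(An added cross-check, not part of the original answer:~by the results of
this section the transformation multiplies the size of any box by the
determinant found in part~(a), so the answers to parts~(b) and~(c) should
be related by that factor, and they are.)
\begin{equation*}
2\cdot
\begin{vmat}
1 &2 &1 \\
-1 &0 &1 \\
2 &-1 &0
\end{vmat}
=2\cdot 6
=12
=\begin{vmat}
-1 &3 &1 \\
4 &5 &4 \\
5 &-5 &-1
\end{vmat}
\end{equation*}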
\item
By what factor does each transformation change the size of
boxes?
\begin{exparts*}
...
@@ -358,6 +358,9 @@ the \definend{standard basis}\index{standard basis}\index{basis!standard}%
\index{standard basis!complex number scalars}
for \( \C^n \) as a vector space over $\C$
and again denote it \( \stdbasis_n \).
Another example is that
$\polyspace_n$ will be the vector space of polynomials of degree~$n$ or less
with complex coefficients.
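For instance (an added illustration of the notation just given), one member
of $\polyspace_2$ in this setting is the polynomial below.
\begin{equation*}
(2+3i)+4x-ix^2
\end{equation*}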
...
@@ -103,11 +103,11 @@ So we have this matrix representation of the map.
The matrix changing bases from $B$ to $D$ is $\rep{\identity}{B,D}$.
We find these by eye
\begin{equation*}
\rep{\identity(1)}{D}=\colvec{1 \\ 0 \\ 0}
\quad
\rep{\identity(x)}{D}=\colvec{-1 \\ 1 \\ 0}
\quad
\rep{\identity(x^2)}{D}=\colvec{0 \\ -1 \\ 1}
\end{equation*}
to get this.
\begin{equation*}
@@ -144,7 +144,7 @@ To check that, and to underline what the arrow diagram says
V_{\wrt{D}} @>t>\hat{T}> V_{\wrt{D}}
\end{CD}
\end{equation*}
we calculate $\hat{T}$ directly.
The effect of the map on the basis elements is
$d/dx(1)=0$, $d/dx(1+x)=1$, and $d/dx(1+x+x^2)=1+2x$.
Representing those with respect to $D$
@@ -155,7 +155,7 @@ Representing those with respect to $D$
\quad
\rep{1+2x}{D}=\colvec{-1 \\ 2 \\ 0}
\end{equation*}
gives the same matrix $\hat{T}=\rep{d/dx}{D,D}$ as above.
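(An added check, assuming, as the representations above indicate, that
$D=\sequence{1,1+x,1+x+x^2}$:~the last representation comes from solving
\begin{equation*}
c_1\cdot 1+c_2\cdot(1+x)+c_3\cdot(1+x+x^2)=1+2x
\end{equation*}
which gives $c_3=0$, $c_2=2$, and $c_1=-1$, matching $\rep{1+2x}{D}$ above.)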
\end{frame}
\begin{frame}
The definition doesn't require that we consider the underlying maps.
@@ -455,115 +455,6 @@ Not every vector is simply rescaled.
\begin{frame}
Matrices that are similar have the same eigenvalues, but
needn't have the same eigenvectors.
\ex
These two are similar
\begin{equation*}
T=
\begin{mat}
4 &0 &0 \\
0 &8 &0 \\
0 &0 &12
\end{mat}
\qquad
S=
\begin{mat}[r]
6 &-1 &-1 \\
2 &11 &-1 \\
-6 &-5 &7
\end{mat}
\end{equation*}
since $S=PTP^{-1}$ for this $P$.
\begin{equation*}
P=
\begin{mat}[r]
1 &-1 &0 \\
0 &1 &-1 \\
2 &1 &1
\end{mat}
\qquad
P^{-1}=
\begin{mat}[r]
1/2 &1/4 &1/4 \\
-1/2 &1/4 &1/4 \\
-1/2 &-3/4 &1/4
\end{mat}
\end{equation*}
\end{frame}
\begin{frame}
\noindent Suppose that $\map{t}{\C^3}{\C^3}$ is
represented by $T$ with respect to the standard basis.
Then this is the action of $t$.
\begin{equation*}
\colvec{x \\ y \\ z}\mapsunder{t}\colvec{4x \\ 8y \\ 12z}
\end{equation*}
\pause
By eye we see that three
eigenvalues of~$t$ are $\lambda_1=4$, $\lambda_2=8$, and~$\lambda_3=12$.
For instance this holds.
\begin{equation*}
T\cdot\colvec{1 \\ 0 \\ 0}
=\begin{mat}
4 &0 &0 \\
0 &8 &0 \\
0 &0 &12
\end{mat}\colvec{1 \\ 0 \\ 0}
=4\cdot\colvec{1 \\ 0 \\ 0}
\end{equation*}
\end{frame}
\begin{frame}
Contrast that with $S=PTP^{-1}$, which represents the same function, but
with respect to a different basis.
\begin{equation*}
\begin{CD}
V_{\wrt{\stdbasis_3}} @>t>T> V_{\wrt{\stdbasis_3}} \\
@V{\scriptstyle\identity} VV @V{\scriptstyle\identity} VV \\
V_{\wrt{B}} @>t>S> V_{\wrt{B}}
\end{CD}
\end{equation*}
We can easily find the basis~$B$.
Since $P^{-1}=\rep{\identity}{B,\stdbasis_3}$, its first column is
$\rep{\identity(\vec{\beta}_1)}{\stdbasis_3}=\rep{\vec{\beta}_1}{\stdbasis_3}$.
With respect to the standard basis any vector is represented by itself
so the first basis element $\vec{\beta}_1$ is the first column of $P^{-1}$.
The same goes for the other two columns.
\begin{equation*}
B=\sequence{\colvec[r]{1/2 \\ -1/2 \\ -1/2},
\colvec[r]{1/4 \\ 1/4 \\ -3/4},
\colvec[r]{1/4 \\ 1/4 \\ 1/4}}
\end{equation*}
\end{frame}
\begin{frame}
% We know that the transformation~$t$ has eigenvalues of $4$, $8$, and~$12$.
% For instance $t(\vec{e}_1)=4\vec{e}_1$.
Now, since each represents the transformation~$t$, the matrices~$T$ and $S$
reflect the same action $\vec{e}_1\mapsto4\vec{e}_1$.
\begin{align*}
&\rep{t}{\stdbasis_3,\stdbasis_3}\cdot\rep{\vec{e}_1}{\stdbasis_3}
=T\cdot\rep{\vec{e}_1}{\stdbasis_3}
=4\cdot\rep{\vec{e}_1}{\stdbasis_3} \\
&\rep{t}{B,B}\cdot\rep{\vec{e}_1}{B}
=S\cdot\rep{\vec{e}_1}{B}
=4\cdot\rep{\vec{e}_1}{B}
\end{align*}
But, while in the two equations the $4$'s are the same, the vectors'
representations are not.
\begin{align*}
T\cdot\rep{\vec{e}_1}{\stdbasis_3}
=T\colvec{1 \\ 0 \\ 0}
&=4\cdot\colvec{1 \\ 0 \\ 0} \\
S\cdot\rep{\vec{e}_1}{B}
=S\cdot\colvec{1 \\ 0 \\ 2}
&=4\cdot\colvec{1 \\ 0 \\ 2}
\end{align*}
So the two matrices have the same eigenvalues but different eigenvectors.
\end{frame}
\begin{frame}{Computing eigenvalues and eigenvectors}
\ex
@@ -791,6 +682,121 @@ These are for $\lambda_2=2$.
\end{frame}
\begin{frame}
Matrices that are similar have the same eigenvalues, but
needn't have the same eigenvectors.
\ex
These two are similar
\begin{equation*}
T=
\begin{mat}
4 &0 &0 \\
0 &8 &0 \\
0 &0 &12
\end{mat}
\qquad
S=
\begin{mat}[r]
6 &-1 &-1 \\
2 &11 &-1 \\
-6 &-5 &7
\end{mat}
\end{equation*}
since $S=PTP^{-1}$ for this $P$.
\begin{equation*}
P=
\begin{mat}[r]
1 &-1 &0 \\
0 &1 &-1 \\
2 &1 &1
\end{mat}
\qquad
P^{-1}=
\begin{mat}[r]
1/2 &1/4 &1/4 \\
-1/2 &1/4 &1/4 \\
-1/2 &-3/4 &1/4
\end{mat}
\end{equation*}
For the first matrix
\begin{equation*}
\colvec{1 \\ 0 \\ 0}
\end{equation*}
is an eigenvector associated with the eigenvalue~$4$ but
that does not hold for the second matrix.
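A worked illustration of that claim (added here as a check):~since
$S(P\vec{e}_1)=PTP^{-1}P\vec{e}_1=PT\vec{e}_1=4\cdot P\vec{e}_1$,
the first column of~$P$ is an eigenvector of~$S$ associated with~$4$,
while $\vec{e}_1$ itself is not simply rescaled by~$S$.
\begin{equation*}
S\cdot\colvec{1 \\ 0 \\ 0}=\colvec{6 \\ 2 \\ -6}
\qquad
S\cdot\colvec{1 \\ 0 \\ 2}=\colvec{4 \\ 0 \\ 8}=4\cdot\colvec{1 \\ 0 \\ 2}
\end{equation*}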
\end{frame}
% \begin{frame}
% \noindent Suppose that $\map{t}{\C^3}{\C^3}$ is
% represented by $T$ with respect to the standard basis.
% Then this is the action of $t$.
% \begin{equation*}
% \colvec{x \\ y \\ z}\mapsunder{t}\colvec{4x \\ 8y \\ 12z}
% \end{equation*}
% \pause
% By eye we see that three
% eigenvalues of~$t$ are $\lambda_1=4$, $\lambda_2=8$, and~$\lambda_3=12$.
% For instance this holds.
% \begin{equation*}
% T\cdot\colvec{1 \\ 0 \\ 0}
% =\begin{mat}
% 4 &0 &0 \\
% 0 &8 &0 \\
% 0 &0 &12
% \end{mat}\colvec{1 \\ 0 \\ 0}
% =4\cdot\colvec{1 \\ 0 \\ 0}
% \end{equation*}
% \end{frame}
% \begin{frame}
% Contrast that with $S=PTP^{-1}$, which represents the same function, but
% with respect to a different basis.
% \begin{equation*}
% \begin{CD}
% V_{\wrt{\stdbasis_3}} @>t>T> V_{\wrt{\stdbasis_3}} \\
% @V{\scriptstyle\identity} VV @V{\scriptstyle\identity} VV \\
% V_{\wrt{B}} @>t>S> V_{\wrt{B}}
% \end{CD}
% \end{equation*}
% We can easily find the basis~$B$.
% Since $P^{-1}=\rep{\identity}{B,\stdbasis_3}$, its first column is
% $\rep{\identity(\vec{\beta}_1)}{\stdbasis_3}=\rep{\vec{\beta}_1}{\stdbasis_3}$.
% With respect to the standard basis any vector is represented by itself
% so the first basis element $\vec{\beta}_1$ is the first column of $P^{-1}$.
% The same goes for the other two columns.
% \begin{equation*}
% B=\sequence{\colvec[r]{1/2 \\ -1/2 \\ -1/2},
% \colvec[r]{1/4 \\ 1/4 \\ -3/4},
% \colvec[r]{1/4 \\ 1/4 \\ 1/4}}
% \end{equation*}
% \end{frame}
% \begin{frame}
% % We know that the transformation~$t$ has eigenvalues of $4$, $8$, and~$12$.
% % For instance $t(\vec{e}_1)=4\vec{e}_1$.
% Now, since each represents the transformation~$t$, the matrices~$T$ and $S$
% reflect the same action $\vec{e}_1\mapsto4\vec{e}_1$.
% \begin{align*}
% &\rep{t}{\stdbasis_3,\stdbasis_3}\cdot\rep{\vec{e}_1}{\stdbasis_3}
% =T\cdot\rep{\vec{e}_1}{\stdbasis_3}
% =4\cdot\rep{\vec{e}_1}{\stdbasis_3} \\
% &\rep{t}{B,B}\cdot\rep{\vec{e}_1}{B}
% =S\cdot\rep{\vec{e}_1}{B}
% =4\cdot\rep{\vec{e}_1}{B}
% \end{align*}
% But, while in the two equations the $4$'s are the same, the vectors
% representations are not.
% \begin{align*}
% T\cdot\rep{\vec{e}_1}{\stdbasis_3}
% =T\colvec{1 \\ 0 \\ 0}
% &=4\cdot\colvec{1 \\ 0 \\ 0} \\
% S\cdot\rep{\vec{e}_1}{B}
% =S\cdot\colvec{1 \\ 0 \\ 2}
% &=4\cdot\colvec{1 \\ 0 \\ 2}
% \end{align*}
% So the two matrices have the same eigenvalues but different eigenvectors.
% \end{frame}
\begin{frame}{Characteristic polynomial}
...
@@ -284,17 +284,19 @@ Thus here is the contrast.
\begin{frame}{The determinant is unique}
Recall our definition, that a function is a determinant if
it satisfies four conditions.
This approach does not make evident that
such a function is unique.
(An analogy: imagine defining a function
$\map{f}{\N}{\N}$ to be an `even-maker' under the condition that its
output is an even constant.
There is such a function, but also there is more than one.)

We now handle that issue; later we will handle the issue of showing that such
a function exists at all.
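To make the analogy concrete (an added illustration), each of these is an
even-maker.
\begin{equation*}
f_1(n)=2
\qquad
f_2(n)=4
\end{equation*}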
\pause
\lm[lm:DetFcnIsUnique]
\ExecuteMetaData[../det1.tex]{lm:DetFcnIsUnique}
@@ -303,10 +305,10 @@ We now settle the second issue.
\ExecuteMetaData[../det1.tex]{pf:DetFcnIsUnique}
\qed
% \medskip
% So if there is a function mapping $\matspace_{\nbyn{n}}$ to $\Re$ that
% satisfies the four conditions of the definition then there is only one such
% function.
\end{frame}
\begin{frame}{More process discussion}
We are left with the possibility that such a function does not exist.
@@ -331,14 +333,14 @@ such a thing,
\pause
The rest of this section gives an alternative way to compute
the determinant, a formula.
This formula does not involve Gauss's Method and
makes plain that the determinant is a function,
that it returns well-defined outputs.
As mentioned earlier, computing a determinant with this formula
is less practical than using the algorithm of Gauss's Method since it
is slow.
But it nonetheless is invaluable for the theory.
\end{frame}
@@ -445,6 +447,11 @@ determinants also break along a plus sign one row at a time.
\begin{frame}
\ExecuteMetaData[../det1.tex]{pf:DetsMultilinear3}
\qed
\medskip
\noindent (\textit{Remark}.
Some authors use multilinearity to define the determinant in place of our
four conditions that lead to Gauss's Method.)
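(Here is a small added instance of that property, splitting the second row
as a sum.)
\begin{equation*}
\begin{vmat}
1 &2 \\
3 &5
\end{vmat}
=
\begin{vmat}
1 &2 \\
3 &4
\end{vmat}
+
\begin{vmat}
1 &2 \\
0 &1
\end{vmat}
\end{equation*}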
\end{frame}
@@ -692,12 +699,6 @@ There are $3\cdot 2\cdot 1=6$ of these.
\noindent
After bringing out each entry from the original matrix, we are left with
matrices that are all $0$'s except for a single~$1$ in each row and column.
So, only one thing remains
to be done in our process of justifying the definition
of determinant by finding a way to express
determinants without using Gauss's Method:~give a formula for
the determinant of such matrices (not involving Gauss's Method).
\end{frame}
@@ -825,8 +826,14 @@ Renaming the matrix entries gives the familiar $\nbyn{2}$ formula.
\begin{frame}
The only thing remaining in our process of expressing the determinant
without Gauss's Method is to give a formula for
the determinant of such matrices.
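For instance (an added illustration), the four conditions already fix the
value for a matrix like this one, since a single row swap brings it to the
identity.
\begin{equation*}
\begin{vmat}
0 &1 &0 \\
1 &0 &0 \\
0 &0 &1
\end{vmat}
=-\begin{vmat}
1 &0 &0 \\
0 &1 &0 \\
0 &0 &1
\end{vmat}
=-1
\end{equation*}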
We do that in the next subsection.
\pause
That subsection is optional
so we state its results here.
\th[th:DetsExist]
\ExecuteMetaData[../det1.tex]{th:DetsExist}
@@ -1085,19 +1092,16 @@ So $\sgn(\phi)=+1$.
%..........
\begin{frame}{Process finished}
We are in the process of showing that
a function exists that satisfies the four conditions in the definition
of determinant.
We must show that for each input square matrix there is a well-defined
output value~\Dash Gauss's Method can be done in more than one way so
it isn't obvious that by keeping track of signs and multiplying down the
diagonal we always get the same output.
Consequently we have turned to getting an alternate formula
that obviously gives only one output.
\pause
\ExecuteMetaData[../det1.tex]{DefiningDFunction}
...