Commit f9d22cce by Jim Hefferon

parent 35d2ab72
No preview for this file type
@@ -225,10 +225,10 @@
 Instead, for homomorphisms we have a weaker but still very useful result.
 %<*th:HomoDetActOnBasis>
 A homomorphism is determined by its action on a basis:~if $V$ is a
-vector space with basis $\sequence{\vec{\beta}_1,\dots,\vec{\beta}_n}$
-and $W$ is a vector space with elements $\vec{w}_1,\dots,\vec{w}_n$
-(perhaps not distinct elements) then
+vector space with basis $\sequence{\vec{\beta}_1,\dots,\vec{\beta}_n}$,
+if $W$ is a vector space,
+and if $\vec{w}_1,\dots,\vec{w}_n\in W$
+(these codomain elements need not be distinct) then
 there exists a homomorphism from $V$ to $W$ sending each
 $\vec{\beta}_i$ to $\vec{w}_i$, and that homomorphism is unique.
 %
@@ -262,13 +262,14 @@
 is the calculation.
 %<*pf:HomoDetActOnBasis2>
 This map is unique because if $\map{\hat{h}}{V}{W}$
-is another homomorphism satisfying that $h^\prime(\vec{\beta}_i)=\vec{w}_i$
+is another homomorphism satisfying that $\hat{h}(\vec{\beta}_i)=\vec{w}_i$
 for each $i$
-then $h$ and $h^\prime$ agree on all of the vectors in the domain.
+then $h$ and $\hat{h}$ have the same effect on all of the vectors
+in the domain.
 \begin{multline*}
-  h^\prime(\vec{v})
-  =h^\prime(c_1\vec{\beta}_1+\dots+c_n\vec{\beta}_n)
-  =c_1 h^\prime(\vec{\beta}_1)+\dots+c_n h^\prime(\vec{\beta}_n)  \\
+  \hat{h}(\vec{v})
+  =\hat{h}(c_1\vec{\beta}_1+\dots+c_n\vec{\beta}_n)
+  =c_1 \hat{h}(\vec{\beta}_1)+\dots+c_n \hat{h}(\vec{\beta}_n)  \\
   =c_1\vec{w}_1+\dots+c_n\vec{w}_n
   =h(\vec{v})
 \end{multline*}
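The theorem edited in this hunk says a homomorphism is pinned down by where it sends a basis. As a numerical aside (my illustration, not part of the commit; the helper name and all particular vectors are made up), the following Python sketch builds the unique linear map for a basis of $\Re^2$ by solving for the representation $c_1\vec{\beta}_1+c_2\vec{\beta}_2=\vec{v}$ and then taking $c_1\vec{w}_1+c_2\vec{w}_2$.

```python
def make_homomorphism(beta1, beta2, w1, w2):
    """The unique linear h with h(beta_i) = w_i, for a basis of R^2.

    Solves the 2x2 system c1*beta1 + c2*beta2 = v by Cramer's rule,
    then returns h(v) = c1*w1 + c2*w2 (the linear extension).
    """
    det = beta1[0] * beta2[1] - beta2[0] * beta1[1]  # nonzero for a basis

    def h(v):
        c1 = (v[0] * beta2[1] - beta2[0] * v[1]) / det
        c2 = (beta1[0] * v[1] - v[0] * beta1[1]) / det
        return tuple(c1 * a + c2 * b for a, b in zip(w1, w2))

    return h

beta1, beta2 = (1.0, 0.0), (1.0, 1.0)      # a basis of R^2
w1, w2 = (1.0, 2.0, 3.0), (0.0, 0.0, 0.0)  # images need not be distinct or independent
h = make_homomorphism(beta1, beta2, w1, w2)

assert h(beta1) == w1 and h(beta2) == w2
# Linearity spot-check: v = 2*beta1 + 3*beta2 must map to 2*w1 + 3*w2.
v = (2 * 1.0 + 3 * 1.0, 2 * 0.0 + 3 * 1.0)
assert h(v) == tuple(2 * a + 3 * b for a, b in zip(w1, w2))
```

Uniqueness is reflected in the construction: once the $c_i$ and the $\vec{w}_i$ are fixed, there is no further freedom in the formula.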
@@ -161,6 +161,7 @@
 I thank Gabriel S Santiago for the cover colors.
 I am also grateful to Saint Michael's College for supporting this
 project over many years.
 And, I thank my wife Lynne for her unflagging encouragement.
@@ -236,13 +236,13 @@
 equations true.
 \pause
 Gauss' Method makes the inconsistency clear.
 \begin{equation*}
-\grstep[-2\rho_1+\rho_3]{-\rho_1+\rho_2}\;
+\grstep[-2\rho_1+\rho_3]{-\rho_1+\rho_2}
 \begin{linsys}{3}
 x  &+  &y  &+  &z  &=  &6  \\
    &   &y  &   &   &=  &2  \\
    &   &y  &   &   &=  &1
 \end{linsys}
-\;\grstep{-\rho_2+\rho_3}\;
+\grstep{-\rho_2+\rho_3}
 \begin{linsys}{3}
 x  &+  &y  &+  &z  &=  &6  \\
    &   &y  &   &   &=  &2  \\
@@ -262,7 +262,7 @@
 This system has infinitely many solutions.
 x  &-  &y  &+  &z   &=  &4   \\
 x  &+  &y  &-  &2z  &=  &-1
 \end{linsys}
-\;\grstep{-\rho_1+\rho_2}\;
+\grstep{-\rho_1+\rho_2}
 \begin{linsys}{3}
 x  &-  &y   &+  &z   &=  &4   \\
    &   &2y  &-  &3z  &=  &-5
@@ -316,8 +316,8 @@
 We've seen that this system has infinitely many solutions.
 x   &   &   &+  &z   &=  &3   \\
 3x  &-  &y  &+  &7z  &=  &15
 \end{linsys}
-\;\grstep[3\rho_1+\rho_3]{-\rho_1+\rho_2}
-\;\grstep{-4\rho_2+\rho_3}\;
+\grstep[3\rho_1+\rho_3]{-\rho_1+\rho_2}
+\grstep{-4\rho_2+\rho_3}
 \begin{linsys}{3}
 -x  &-  &y   &+  &3z  &=  &3  \\
     &   &-y  &+  &4z  &=  &6  \\
@@ -617,7 +617,7 @@
 The same Gauss's Method steps reduce it to echelon form.
 1  &2   &-1  &0  &0  \\
 2  &-1  &-2  &1  &0
 \end{amat}
-\;\grstep{-2\rho_1+\rho_2}\;
+\grstep{-2\rho_1+\rho_2}
 \begin{amat}[r]{4}
 1  &2   &-1  &0  &0  \\
 0  &-5  &0   &1  &0
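The inconsistency that the first hunk's slides demonstrate can be checked mechanically. This Python fragment (my addition, not from the commit) starts from the intermediate system shown there and applies the $-\rho_2+\rho_3$ step; the result is the impossible equation $0=-1$.

```python
# Augmented matrix of the intermediate system from the first hunk:
#   x + y + z = 6,   y = 2,   y = 1
rows = [
    [1, 1, 1, 6],
    [0, 1, 0, 2],
    [0, 1, 0, 1],
]

def row_combination(rows, k, src, dst):
    """Gauss's Method step: replace row dst by k*row[src] + row[dst]."""
    rows[dst] = [k * a + b for a, b in zip(rows[src], rows[dst])]

row_combination(rows, -1, 1, 2)   # the -rho_2 + rho_3 step
assert rows[2] == [0, 0, 0, -1]   # reads "0 = -1": the system is inconsistent
```

The same `row_combination` helper expresses every `\grstep` row-combination operation appearing in these slides.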
@@ -134,14 +134,14 @@
 With this system
 \end{equation*}
 we can rewrite in matrix notation
 and do Gauss's Method.
-\begin{equation*}
-\grstep{-1\rho_1+\rho_2}\,
+\begin{equation*}\hspace*{-2em}
+\grstep{-1\rho_1+\rho_2}
 \begin{amat}{4}
 1  &-1  &0  &-2  &2  \\
 0  &2   &3  &3   &1  \\
 0  &-1  &1  &-1  &0
 \end{amat}
-\,\grstep{(1/2)\rho_2+\rho_3}\,
+\grstep{(1/2)\rho_2+\rho_3}
 \begin{amat}{4}
 1  &-1  &0  &-2  &2  \\
 0  &2   &3  &3   &1  \\
@@ -150,7 +150,7 @@
 \end{equation*}
 We can combine the operations turning the leading entries to $1$.
 \begin{equation*}
-\grstep[(2/5)\rho_3]{(1/2)\rho_2}\,
+\grstep[(2/5)\rho_3]{(1/2)\rho_2}
 \begin{amat}{4}
 1  &-1  &0    &-2   &2     \\
 0  &1   &3/2  &3/2  &-1/2  \\
@@ -158,14 +158,14 @@
 \end{amat}
 \end{equation*}
 Now eliminate upwards.
-\begin{equation*}
-\grstep{-(3/2)\rho_3+\rho_2}\,
+\begin{equation*}\hspace*{-2em}
+\grstep{-(3/2)\rho_3+\rho_2}
 \begin{amat}{4}
 1  &-1  &0  &-2   &2     \\
 0  &1   &0  &6/5  &-1/5  \\
 0  &0   &1  &1/5  &-1/5
 \end{amat}
-\,\grstep{\rho_2+\rho_1}\,
+\grstep{\rho_2+\rho_1}
 \begin{amat}{4}
 1  &0  &0  &-4/5  &9/5   \\
 0  &1  &0  &6/5   &-1/5  \\
@@ -265,17 +265,17 @@
 directly gives the parametrized description of the solution set.
 1  &3  &5  \\
 2  &4  &8
 \end{amat}
-\,\grstep{-2\rho_1+\rho_2}\,
+\grstep{-2\rho_1+\rho_2}
 \begin{amat}{2}
 1  &3   &5   \\
 0  &-2  &-2
 \end{amat}
-\,\grstep{-(1/2)\rho_2}\,
+\grstep{-(1/2)\rho_2}
 \begin{amat}{2}
 1  &3  &5  \\
 0  &1  &1
 \end{amat}  \\
-\grstep{-3\rho_2+\rho_1}\,
+\grstep{-3\rho_2+\rho_1}
 \begin{amat}{2}
 1  &0  &2  \\
 0  &1  &1
@@ -376,13 +376,13 @@
 the third row is the sum of the first and second.
 \pause
 But after Gauss's Method
 \begin{equation*}
-\grstep[-3\rho_1+\rho_3]{-2\rho_1+\rho_3}\,
+\grstep[-3\rho_1+\rho_3]{-2\rho_1+\rho_3}
 \begin{mat}[r]
 1  &-1  &3   \\
 0  &2   &-2  \\
 0  &2   &-2
 \end{mat}
-\,\grstep{-\rho_2+\rho_3}\,
+\grstep{-\rho_2+\rho_3}
 \begin{mat}[r]
 1  &-1  &3   \\
 0  &2   &-2  \\
@@ -452,10 +452,10 @@
 see if they are equal.
 \pause
 The results are
 \begin{equation*}
-\grstep[-(4/3)\rho_1+\rho_3]{-(1/3)\rho_1+\rho_2}\;
-\;\grstep{-1\rho_2+\rho_3}\;
-\;\grstep[-(3/5)\rho_2]{(1/3)\rho_1}\;
-\;\grstep{-(2/3)\rho_2+\rho_1}\;
+\grstep[-(4/3)\rho_1+\rho_3]{-(1/3)\rho_1+\rho_2}
+\grstep{-1\rho_2+\rho_3}
+\grstep[-(3/5)\rho_2]{(1/3)\rho_1}
+\grstep{-(2/3)\rho_2+\rho_1}
 \begin{mat}[r]
 1  &0  &4/5   \\
 0  &1  &-6/5  \\
@@ -464,10 +464,10 @@
 \end{equation*}
 and
 \begin{equation*}
-\grstep[-(1/3)\rho_1+\rho_3]{-2\rho_1+\rho_2}\;
-\;\grstep{\rho_2\leftrightarrow\rho_3}\;
-\;\grstep[-3\rho_2]{(1/3)\rho_1}\;
-\;\grstep{-(1/3)\rho_2+\rho_1}\;
+\grstep[-(1/3)\rho_1+\rho_3]{-2\rho_1+\rho_2}
+\grstep{\rho_2\leftrightarrow\rho_3}
+\grstep[-3\rho_2]{(1/3)\rho_1}
+\grstep{-(1/3)\rho_2+\rho_1}
 \begin{mat}[r]
 1  &0  &2   \\
 0  &1  &-8  \\
@@ -60,39 +60,42 @@
 \begin{frame}
 \ex
 We have the intuition that the vector spaces
-$\Re^2$ and $\polyspace_1$ are ``the same,'' for instance in that
+$\Re^2$ and $\polyspace_1$ are ``the same,'' in that
 they are two-component spaces.
+For instance
 \begin{align*}
 &\colvec{1 \\ 2}
 \quad\text{is just like}\quad
-1+2x  \\
-\text{and}
+1+2x,  \\
+\text{and}\quad
 &\colvec{-3 \\ 1/2}
 \quad\text{is just like}\quad
--3+(1/2)x
+-3+(1/2)x,
 \end{align*}
 etc.
-What makes the spaces ``just like'' each other is that this
-association holds through the operations of addition
+What makes the spaces alike, not just the sets, is that the
+association persists through the operations: this
 \begin{multline*}
 \colvec{1 \\ 2}+\colvec{-3 \\ 1/2}=\colvec{-2 \\ 5/2}  \\
 \text{is just like}\quad
 (1+2x)+(-3+(1/2)x)=-2+(5/2)x
 \end{multline*}
-and scalar multiplication.
+illustrates addition and this
 \begin{equation*}
 3\colvec{1 \\ 2}=\colvec{3 \\ 6}
 \quad\text{is just like}\quad
 3(1+2x)=3+6x
 \end{equation*}
+shows scalar multiplication.
 \end{frame}\begin{frame}
 More formally,
-we can associate each two-tall vector with a linear polynomial.
+we can think that each two-tall vector corresponds with a linear
+polynomial.
 \begin{equation*}
 \colvec{a \\ b}
 \quad\longleftrightarrow\quad
 a+bx
 \end{equation*}
 \pause
-Note that this association holds through the vector space operations of
+This association holds through the vector space operations of
 addition
 \begin{multline*}
 \colvec{a_1 \\ b_1}+\colvec{a_2 \\ b_2}=\colvec{a_1+a_2 \\ b_1+b_2}  \\
@@ -134,7 +137,7 @@
 For instance, these are corresponding elements.
 \colvec[r]{1 \\ -1 \\ 2 \\ -2}
 \end{equation*}
 \pause
-With the association defined, note that it holds up under addition.
+This association persists under addition.
 \begin{multline*}
 \begin{mat}
 a_1  &b_1  \\
@@ -183,7 +186,7 @@
 Here is an example of that with particular elements.
 \colvec{1 \\ 3 \\ 5 \\ -5}
 \end{multline*}
 \pause
-The association also holds under scalar multiplication.
+The association persists also under scalar multiplication.
 \begin{equation*}
 r\cdot\begin{mat}
 a  &b  \\
@@ -240,7 +243,7 @@
 $\Re^3$ under this map.
 Here are two examples of the action of $f$.
 \begin{equation*}
 f(1+2x+3x^2)=\colvec{1 \\ 2 \\ 3}
-\quad\text{and}\quad
+\qquad
 f(3+4x^2)=\colvec{3 \\ 0 \\ 4}
 \end{equation*}
 To verify that $f$ is an isomorphism we must check condition~(1),
@@ -250,9 +253,9 @@
 that $f$ is a correspondence, and condition~(2),
 that $f$ preserves structure.
 The first part of~(1) is that $f$ is one-to-one.
 We usually verify one-to-oneness by assuming that the function
 yields the same output
-on two inputs, and then show that the two inputs must therefore be
-equal.
-So assume that $f(a_0+a_1x+a_2x^2)=f(b_0+b_1x+b_2x^2)$.
-By definition of $f$ we have
+on two inputs $f(a_0+a_1x+a_2x^2)=f(b_0+b_1x+b_2x^2)$
+and then show that the two inputs must therefore be equal.
+The definition of $f$ gives
 \begin{equation*}
 \colvec{a_0 \\ a_1 \\ a_2}
 =
@@ -260,19 +263,18 @@
 \end{equation*}
 and two column vectors are equal only if their entries are equal
 $a_0=b_0$, $a_1=b_1$, and~$a_2=b_2$.
-Thus the starting inputs are equal $a_0+a_1x+a_2x^2=b_0+b_1x+b_2x^2$
-and so $f$ is one-to-one.
+Thus the original inputs are equal $a_0+a_1x+a_2x^2=b_0+b_1x+b_2x^2$,
+and so $f$ is one-to-one.
 \pause
 The second part of~(1) is that $f$ is onto.
 We usually verify ontoness by considering an element of the codomain
-and producing an element of the domain that maps to it.
-So consider this member of $\Re^3$.
 \begin{equation*}
-\colvec{u \\ v \\ w}
+\vec{v}=\colvec{v_0 \\ v_1 \\ v_2}\in\Re^3
 \end{equation*}
-Observe that it is the image under $f$ of the member $u+vx+wx^2$ of
-the domain.
+and producing an element of the domain that maps to it.
+Observe that $\vec{v}$ is the image under $f$ of the member
+$v_0+v_1x+v_2x^2$ of the domain.
 Thus $f$ is onto.
 \end{frame}
 \begin{frame}
@@ -283,11 +285,11 @@
 Consider $f$ acting on the sum of two elements of the domain.
 f(\,(a_0+a_1x+a_2x^2)+(b_0+b_1x+b_2x^2)\,)  \\
 =f(\,(a_0+b_0)+(a_1+b_1)x+(a_2+b_2)x^2\,)
 \end{multline*}
-By definition of $f$ we have this.
+The definition of $f$ gives
 \begin{equation*}
 =\colvec{a_0+b_0 \\ a_1+b_1 \\ a_2+b_2}
 \end{equation*}
-Of course,
+and that equals
 \begin{equation*}
 =\colvec{a_0 \\ a_1 \\ a_2}
 +
@@ -308,7 +310,11 @@
 This is similar to the check for addition.
 &=\colvec{ra_0 \\ ra_1 \\ ra_2}  \\
 &=f(\,(ra_0)+(ra_1)x+(ra_2)x^2\,)
 \end{align*}
-\qed
+So the function~$f$ is an isomorphism.
+Because there is an isomorphism, the two spaces are isomorphic
+$\polyspace_2\isomorphicto \Re^3$.
+% \qed
 \end{frame}
@@ -316,11 +322,11 @@
 %..........
 \begin{frame}{Special case: Automorphisms}
-\df[df:Automorphism]
+\df[df:Automorphism]\hspace*{-1em}
 \ExecuteMetaData[../map1.tex]{df:Automorphism}
 \pause
-\ex[exam:RigidPlaneMapsAutos]
+\ex[exam:RigidPlaneMapsAutos]\hspace*{-1em}
 \ExecuteMetaData[../map1.tex]{ex:RigidPlaneMapsAutos0}
 \centergraphic{../ch3.14}
@@ -405,7 +411,7 @@
 By the definition of~$f$ we have that $t_1=t_2$ and so the two members
 of $L$ are equal.
 Next we check that $f$ is onto.
-Consider this member of the codomain: $r\in\Re$.
+Consider a member of the codomain, $r\in\Re$.
 There is a member of the domain that maps to it, namely this member
 of $L$.
 \begin{equation*}
 f(\,r\cdot\colvec{1 \\ 2}\,)
@@ -413,13 +419,13 @@
 %
 Thus $f$ is onto.
 To finish we check that $f$ preserves structure with the lemma's~(2).
-\begin{equation*}
+\begin{multline*}
 f(\,t_1\cdot\colvec{1 \\ 2}+t_2\cdot\colvec{1 \\ 2}\,)
-=f(\,(t_1+t_2)\cdot\colvec{1 \\ 2}\,)
-=t_1+t_2
+=f(\,(t_1+t_2)\cdot\colvec{1 \\ 2}\,)  \\
+=t_1+t_2
 =
 f(\,t_1\cdot\colvec{1 \\ 2}\,)+f(\,t_2\cdot\colvec{1 \\ 2}\,)
-\end{equation*}
+\end{multline*}
 \end{frame}
@@ -446,10 +452,12 @@
 \ex
 We saw earlier that this planar line through the origin
-(under the natural addition and scalar multiplication operations)
 \begin{equation*}
 L=\set{t\cdot\colvec{1 \\ 2} \suchthat t\in\Re}
 \end{equation*}
-is isomorphic to $\Re^1$
+(under the natural operations)
+is isomorphic to $\Re^1$ via this function.
 \begin{equation*}
 f(\,\colvec{t \\ 2t}\,)
@@ -560,8 +568,9 @@
 every vector $\vec{v}$ has a unique representation with respect to
 that basis.
 %..........
 \begin{frame}
 \ex
-The plane $2x-y+z=0$ through the origin in $\Re^3$ is a vector space.
-Considering that a one-equation linear system
+The plane $2x-y+z=0$ through the origin in $\Re^3$ is a vector space
+(under the natural addition and scalar multiplication operations).
+Considering that equation to be a one-equation linear system
 and parametrizing with the free variables
 \begin{equation*}
 P=\set{\colvec{x \\ y \\ z}
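The structure-preservation computation that this hunk reflows into a `multline*` can also be run numerically. Here is a small Python check (my addition, not part of the commit; the helper names are invented) for the map $f$ on the line $L=\{t\cdot(1,2)\}$ with $f(t\cdot(1,2))=t$.

```python
def f(v):
    """f(t*(1,2)) = t; recover t from the first component of v."""
    return v[0]

def on_line(t):
    """The member t*(1, 2) of the line L."""
    return (t * 1, t * 2)

t1, t2 = 3.0, -0.5
# f applied to a sum in L ...
lhs = f(tuple(a + b for a, b in zip(on_line(t1), on_line(t2))))
# ... equals the sum of the images, and both equal t1 + t2.
rhs = f(on_line(t1)) + f(on_line(t2))
assert lhs == rhs == t1 + t2
```

This mirrors the displayed chain $f(t_1\cdot\vec{v}+t_2\cdot\vec{v}) = t_1+t_2 = f(t_1\cdot\vec{v})+f(t_2\cdot\vec{v})$ term by term.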
@@ -132,7 +132,7 @@
 and scalar multiplication.
 In contrast, $g$ does not respect addition.
 \begin{equation*}
 g(\colvec{1 \\ 4}+\colvec{5 \\ 6})=-17
-\quad\text{while}\quad
+\qquad
 g(\colvec{1 \\ 4})+g(\colvec{5 \\ 6})=-16
 \end{equation*}
 \end{frame}
@@ -144,18 +144,20 @@
 We proved the following two in the context of studying isomorphisms.
-\lm[le:HomoSendsZeroToZero]
+\lm[le:HomoSendsZeroToZero]\hspace*{-1em}
 \ExecuteMetaData[../map2.tex]{lm:HomoSendsZeroToZero}
-\lm[le:HomoPreserveLinCombo]
+\lm[le:HomoPreserveLinCombo]\hspace*{-1em}
 \ExecuteMetaData[../map2.tex]{lm:HomoPreserveLinCombo}
 \pause
+\medskip
 Clause~(2) is often convenient for verifying that a map is a
 homomorphism.
 \pause
 \ex
-Between any two vector spaces the zero map $\map{Z}{V}{W}$, defined by
+Between any two vector spaces the zero map $\map{Z}{V}{W}$ given by
 $Z(\vec{v})=\zero_W$ is a homomorphism.
-The check is:
+The check using~(2) is:
 $Z(c_1\vec{v}_1+c_2\vec{v}_2)=\zero_W
 =\zero_W+\zero_W=c_1Z(\vec{v}_1)+c_2Z(\vec{v}_2)$.
 \end{frame}
@@ -194,8 +196,8 @@
 this function $\map{h}{\polyspace_1}{\polyspace_1}$.
 \begin{equation*}
 h(a+bx)=b+bx
 \end{equation*}
-Here are two examples of the action of this function:
-$h(1+2x)=2+2x$ and $h(3-x)=-1-x$.
+Here are two examples of the action of this function:
+$1+2x\mapsto 2+2x$ and $3-x\mapsto-1-x$.
 \pause
 This function is linear.
@@ -220,8 +222,9 @@
 The derivative map $\map{d/dx}{\polyspace_2}{\polyspace_1}$ is given
 by $d/dx\,(ax^2+bx+c)=2ax+b$.
 For instance, $d/dx\,(3x^2-2x+4)=6x-2$ and $d/dx\,(x^2+1)=2x$.
 \pause
-It is a homomorphism.
+This map is a homomorphism.
 \begin{multline*}
 d/dx\,\big(\,r_1(a_1x^2+b_1x+c_1)+r_2(a_2x^2+b_2x+c_2)\,\big)
 \hspace*{5em}  \\
 \begin{aligned}
@@ -249,7 +252,7 @@
 is this.
 a  &b  \\
 c  &d
 \end{mat})
-=a+b
+=a+d
 \end{equation*}
 It is linear.
 \begin{multline*}
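The derivative-map hunk above asserts that $d/dx\,(ax^2+bx+c)=2ax+b$ respects linear combinations. As a quick illustration (mine, not the commit's; polynomials are represented as bare coefficient tuples), this Python fragment checks the slide's two sample derivatives and one combination.

```python
def ddx(p):
    """d/dx of ax^2+bx+c, with p = (a, b, c); result 2ax+b as (2a, b)."""
    a, b, c = p
    return (2 * a, b)

def combo(r1, p, r2, q):
    """The linear combination r1*p + r2*q, componentwise."""
    return tuple(r1 * x + r2 * y for x, y in zip(p, q))

p1, p2 = (3, -2, 4), (1, 0, 1)          # 3x^2-2x+4 and x^2+1
assert ddx(p1) == (6, -2)               # 6x-2, as on the slide
assert ddx(p2) == (2, 0)                # 2x
# Homomorphism property: d/dx(r1*p1 + r2*p2) = r1*d/dx(p1) + r2*d/dx(p2)
r1, r2 = 2, -1
assert ddx(combo(r1, p1, r2, p2)) == combo(r1, ddx(p1), r2, ddx(p2))
```

The same pattern verifies the trace-style map of the last hunk, since $a+d$ is likewise linear in the matrix entries.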
@@ -355,7 +358,7 @@
 Consider the standard basis $\stdbasis_2$ for the vector space $\Re^2$.
 We can specify a rotation of the two basis vectors as here.
 \begin{equation*}
 \colvec{1 \\ 0}\mapsto\colvec[r]{\cos\theta \\ \sin\theta}
-\quad
+\qquad
 \colvec{0 \\ 1}\mapsto\colvec[r]{-\sin\theta \\ \cos\theta}
 \qquad
 \vcenteredhbox{\includegraphics{asy/three_ii_rotate.pdf}}
@@ -413,13 +416,13 @@
 transformation.
 %..........
 \begin{frame}
-\lm[le:SpLinFcns]
+\lm[le:SpLinFcns]\hspace*{-1em}
 \ExecuteMetaData[../map2.tex]{lm:SpLinFcns}
 \ExecuteMetaData[../map2.tex]{SpLinFcns}
 \pause
-\pf
+\pf\hspace*{-1em}
 \ExecuteMetaData[../map2.tex]{pf:SpLinFcns}
 \qed
 \end{frame}
@@ -434,19 +437,22 @@
 Fix these bases for the two spaces.
 \colvec{0 \\ 1}}
 \end{equation*}
 A linear map is determined by its action on a basis of the domain
 space.
-Thus the elements of $\linmaps{\Re}{\Re^2}$
+Thus the functions that are elements of $\linmaps{\Re}{\Re^2}$
 \begin{equation*}
 1\mapsunder{t} c_1\colvec{1 \\ 0}+c_2\colvec{0 \\ 1}
 \end{equation*}
-are determined by $c_1, c_2$;
-we could write each such map as $t=t_{c_1,c_2}$.
+are determined by $c_1$ and $c_2$.
+We could write each such map as $t=t_{c_1,c_2}$.
+So $\linmaps{\Re}{\Re^2}$ is a dimension~$2$ space.
 \pause
 \ex
 Similarly, for $\linmaps{\Re^2}{\Re^3}$, fix
 $\stdbasis_2=\sequence{\vec{e}_1, \vec{e}_2}$ and
 $\stdbasis_3=\sequence{\vec{e}_1, \vec{e}_2, \vec{e}_3}$
-(note that one $\vec{e}_1$ is two-tall while the other is three-tall).
-We can characterize the elements in this way.
+(note that the $\vec{e}_1$'s are different; for instance, one is
+two-tall while the other is three-tall).
+We can characterize the elements of $\linmaps{\Re^2}{\Re^3}$ in this
+way.
 \begin{equation*}
 \vec{e}_1\mapsunder{t} c_1\vec{e}_1+c_2\vec{e}_2+c_3\vec{e}_3
 \qquad
@@ -502,7 +508,7 @@
 because given a vector $\vec{w}\in\Re^2$
 \vec{w}=\colvec{a \\ b}
 \end{equation*}
 we can find a
-$\vec{v}\in\Re^3$ that maps to it, specifically any vector with
+$\vec{v}\in\Re^3$ that maps to it, specifically any $\vec{v}$ with
 a first component~$a$ and second component~$b$.
 Thus the rank of $\pi$ is~$2$.
 \end{frame}
@@ -512,7 +518,7 @@
 The derivative map $\map{d/dx}{\Re^4}{\Re^4}$ is linear.
 Its range is
 $\rangespace{d/dx}=\set{a_0+a_1x+a_2x^2+a_3x^3\suchthat a_i\in\Re}$.
-(Verifying that every member of that space is the derivative of a
-fourth degree polynomial is easy.)
+(Verifying that every member of that space is the derivative of some
+fourth degree polynomial is easy.)
 The rank of the derivative function is~$3$.
@@ -551,16 +557,18 @@
 The rank of this map is $1$.
 %..........
 \begin{frame}{Homomorphisms organize the domain}
-When we moved from studying isomorphisms to studying homomorphisms
-we dropped the requirements that the maps be onto and one-to-one.
-We've seen that dropping the onto condition has no effect in the sense
+In moving from isomorphisms to homomorphisms we dropped the
+requirement that the maps be onto and one-to-one.
+Dropping the onto condition has no effect in the sense
 that any homomorphism $\map{h}{V}{W}$ is onto some space, namely its
 range space $\rangespace{h}$.
 \pause
 Consider the effect of dropping the one-to-one condition.
-Now, for some vector $\vec{w}\in W$ in the range there may be many
-vectors $\vec{v}\in V$ mapped to $\vec{w}$.
+Now, for some vectors $\vec{w}\in W$ in the range there may be many
+vectors $\vec{v}\in V$ mapped to $\vec{w}$.
+(Below, to avoid a confused picture there is only one `$\mapsto$'
+arrow for many domain elements.)
 \centergraphic{../ch3.5}
 \ExecuteMetaData[../map2.tex]{InverseImage}
 \end{frame}
@@ -573,12 +581,12 @@
 The projection map $\map{\pi}{\Re^2}{\Re}$ is linear.
 \end{equation*}
 \pause
 We can identify the codomain $\Re$ with the $x$-axis in $\Re^2$.
-Here is a member of the $x$-axis, drawn in red.
+Here is a member of the $x$-axis, outlined in red.
 \centergraphic{asy/three_ii_proj1.pdf}
 \pause
 Next are some elements of $\pi^{-1}(2)$, shown both as dots (as in
 the bean diagram) and as vectors.
-These are also shown in red because they are associated by $\pi$
-with $2$.
+The vectors are also red because they are associated by $\pi$
+with $2$.
 \centergraphic{asy/three_ii_proj2.pdf}
 Think of these as ``$2$~vectors.''
 \end{frame}\begin{frame}
@@ -687,7 +695,7 @@
 equals a ``$\vec{w}_1+\vec{w}_2$~vector.''
 \begin{frame}{Null space}
 Vector spaces have a distinguished element, $\vec{0}$.
 So we next consider the inverse image of that element $h^{-1}(\zero)$.
-\lm[le:NullspIsSubSp]
+\lm[le:NullspIsSubSp]\hspace*{-1em}
 \ExecuteMetaData[../map2.tex]{lm:NullspIsSubSp}
 \pause
@@ -699,7 +707,7 @@
 \medskip
 \no This result complements \nearbylemma{le:RangeIsSubSp}
-that for any subspace of the domain, its image is a subspace of the
-range.
+that for any subspace of the domain its image is a subspace of the
+range.
 \end{frame}
@@ -279,7 +279,7 @@
 gives a system of equations.
 2r_1  &+  &3r_2  &+  &r_3   &   &     &+  &r_5   &=  &0  \\
 2r_1  &+  &3r_2  &+  &4r_3  &-  &r_4  &-  &r_5   &=  &0
 \end{linsys}  \\
-\grstep{-\rho_1+\rho_2}\;
+\grstep{-\rho_1+\rho_2}
 \begin{linsys}{5}
 2r_1  &+  &3r_2  &+  &r_3   &   &     &+  &r_5   &=  &0  \\
       &   &      &+  &3r_3  &-  &r_4  &-  &2r_5  &=  &0
@@ -31,7 +31,7 @@
 \title[Basis and Dimension] % (optional, use only with long paper titles)
 {Two.III Basis and Dimension}
-\author[Jim Hefferon]{\textit{Linear Algebra} \and {\small Jim Hef{}feron}}
+\author[Jim Hefferon]{\textit{Linear Algebra} \\ {\small Jim Hef{}feron}}
 \institute{
 \texttt{http://joshua.smcvt.edu/linearalgebra}
 }
@@ -452,8 +452,8 @@
 Solving the system
 -1  &2  &-1  &2  &0  \\
 1   &3  &-1  &4  &0
 \end{amat}
-\;\grstep[\rho_1+\rho_3]{\rho_1+\rho_2}\;
-\grstep{-2\rho_2+\rho_3}\;
+\grstep[\rho_1+\rho_3]{\rho_1+\rho_2}
+\grstep{-2\rho_2+\rho_3}
 \begin{amat}[r]{4}
 1  &-1  &1  &0  &0  \\
 0  &1   &0  &2  &0  \\
@@ -580,8 +580,8 @@
 spaces.
 -1  &-2  &2  &2  &0  \\
 2   &4   &5  &2  &9
 \end{mat}
-\;\grstep[-2\rho_1+\rho_3]{\rho_1+\rho_2}
-\;\grstep{-\rho_2+\rho_3}\;
+\grstep[-2\rho_1+\rho_3]{\rho_1+\rho_2}
+\grstep{-\rho_2+\rho_3}
 \begin{mat}[r]
 1  &2  &1  &0  &3  \\
 0  &0  &3  &2  &3  \\
@@ -670,7 +670,7 @@
 reduce,
 2  &-1   \\
 3  &1/2
 \end{mat}
-\;\grstep{(-3/2)\rho_1+\rho_2}
+\grstep{(-3/2)\rho_1+\rho_2}
 \begin{mat}
 2  &-1  \\
 0  &2
@@ -753,10 +753,10 @@
 The column rank of this matrix is $3$.
 Its largest set of linearly independent columns is size~$3$ because
 that's the size of its largest set of linearly independent rows.
-\begin{equation*}
+\begin{equation*}\hspace*{-2em}
 \grstep[-2\rho_1+\rho_3 \\ -(1/2)\rho_1+\rho_4]{-(3/2)\rho_1+\rho_2}
-\;\grstep{-(1/3)\rho_2+\rho_4}
-\;\grstep{\rho_3\leftrightarrow\rho_4}
+\grstep{-(1/3)\rho_2+\rho_4}
+\grstep{\rho_3\leftrightarrow\rho_4}
 \begin{mat}
 2  &-1   &3     &1     &0  &1     \\
 0  &3/2  &-7/2  &-1/2  &4  &-5/2  \\
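The fact behind the last hunk, that column rank equals row rank, is easy to spot-check by machine. Below is a Python sketch (my illustration, not part of the commit) that computes rank by Gauss's Method with exact rational arithmetic and compares a matrix against its transpose; the test matrix is the one whose reduction appears in the second hunk above (its first row, cut off by the diff context, is reconstructed from the reduced form and is an assumption on my part).

```python
from fractions import Fraction

def matrix_rank(mat):
    """Row rank via Gauss's Method (count of pivot rows), done exactly."""
    m = [[Fraction(x) for x in row] for row in mat]
    pivots = 0
    for col in range(len(m[0])):
        for r in range(pivots, len(m)):
            if m[r][col] != 0:
                m[pivots], m[r] = m[r], m[pivots]          # swap pivot up
                for below in range(pivots + 1, len(m)):    # clear below pivot
                    factor = m[below][col] / m[pivots][col]
                    m[below] = [a - factor * b
                                for a, b in zip(m[below], m[pivots])]
                pivots += 1
                break
    return pivots

A = [[1, 2, 1, 0, 3],     # first row reconstructed from the hunk's result
     [-1, -2, 2, 2, 0],
     [2, 4, 5, 2, 9]]
transpose = [list(col) for col in zip(*A)]
assert matrix_rank(A) == matrix_rank(transpose)   # row rank == column rank
```

Using `Fraction` instead of floats keeps the pivot tests exact, which matters because a rank computation must distinguish a true zero from roundoff.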