% Chapter 4, Section 1 _Linear Algebra_ Jim Hefferon
% http://joshua.smcvt.edu/linearalgebra
% 2001-Jun-12
\chapter{Determinants}

In the first chapter we highlighted the special case of linear systems
with the same number of equations as unknowns, those of the form
\( T\vec{x}=\vec{b} \) where $T$ is a square matrix.
We noted that there are only two kinds of $T$'s.
If $T$ is associated with a unique solution
for any $\vec{b}$,
such as for the homogeneous system $T\vec{x}=\zero$, then
$T$ is associated with a unique solution for every $\vec{b}$.
We call such a matrix nonsingular.
The other kind of $T$, where every linear system for which it is
the matrix of coefficients has either no solution or infinitely many
solutions, we call singular.

In our work since then this distinction has been a theme.
For instance, we now know that an
\( \nbyn{n} \) matrix \( T \) is nonsingular if and only if
each of these holds:
%<*EquivalentOfNonsingular>
\begin{itemize}
  \item any system \( T\vec{x}=\vec{b} \) has a solution
    and that solution is unique;
  \item Gauss-Jordan reduction of $T$ yields an identity matrix;
  \item the rows of $T$ form a linearly independent set;
  \item the columns of \( T \) form a linearly independent set,
    a basis for \( \Re^n \);
  \item any map that \( T \) represents is an isomorphism;
  \item an inverse matrix \( T^{-1} \) exists.
\end{itemize}
%</EquivalentOfNonsingular>
So when we look at a square matrix, one of the first things
that we ask is whether it is nonsingular.

This chapter develops a formula that determines whether $T$ is nonsingular.
More precisely, we will develop a formula for $\nbyn{1}$~matrices,
one for $\nbyn{2}$~matrices, etc.
These are naturally related; that is,
we will develop a family of
formulas, a scheme that describes the formula for each size.
Since we will restrict the discussion to square matrices,
in this chapter we will often simply say `matrix' in place of
`square matrix'.
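For example, contrast
\begin{equation*}
  \begin{mat}[r]
    1 &2 \\
    3 &4
  \end{mat}
  \qquad\text{with}\qquad
  \begin{mat}[r]
    1 &2 \\
    2 &4
  \end{mat}
\end{equation*}
The first is nonsingular since Gauss's Method gives two pivots, and so
any system with this matrix of coefficients has one and only one solution.
The second is singular since its second row is twice its first, and so a
system $T\vec{x}=\vec{b}$ with this matrix of coefficients has no solution
unless $b_2=2b_1$, and has infinitely many solutions when $b_2=2b_1$.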
\section{Def{}inition}
%<*DeterminantIntro>
Determining nonsingularity is trivial for \( \nbyn{1} \) matrices.
\begin{equation*}
  \begin{mat}
    a
  \end{mat}
  \quad\text{is nonsingular iff}\quad
  a \neq 0
\end{equation*}
Corollary~Three.IV.\ref{cor:TwoByTwoInv} gives the $\nbyn{2}$ formula.
\begin{equation*}
  \begin{mat}
    a  &b  \\
    c  &d
  \end{mat}
  \quad\text{is nonsingular iff}\quad
  ad-bc \neq 0
\end{equation*}
We can produce the $\nbyn{3}$ formula as we did the prior one,
although the computation is intricate
% \typeout{DET1 ThreeByThreeDetForm cleveref expmt!}
% (see \cref{exer:ThreeByThreeDetForm}).
(see \nearbyexercise{exer:ThreeByThreeDetForm}).
\begin{equation*}
  \begin{mat}
    a  &b  &c  \\
    d  &e  &f  \\
    g  &h  &i
  \end{mat}
  \quad\text{is nonsingular iff}\quad
  aei+bfg+cdh-hfa-idb-gec \neq 0
\end{equation*}
With these cases in mind, we posit a family of
formulas: $a$, $ad-bc$, etc.
For each $n$ the formula defines a
\definend{determinant}\index{determinant}\index{matrix!determinant}
function
$\map{\det_{\nbyn{n}}}{\matspace_{\nbyn{n}}}{\Re}$
such that an $\nbyn{n}$ matrix $T$ is
nonsingular if and only if $\det_{\nbyn{n}}(T)\neq 0$.
%</DeterminantIntro>
(We usually omit the subscript \( \nbyn{n} \) because
the size of \( T \) describes which determinant function we mean.)

\subsectionoptional{Exploration}

\textit{This subsection is an optional motivation and development of the
general definition.
The definition is in the next subsection.}

Above, in each case the matrix is nonsingular if and only if some
formula is nonzero.
But the three formulas don't show an obvious pattern.
We may spot that the \( \nbyn{1} \) term \( a \) has one letter, that the
\( \nbyn{2} \) terms \( ad \) and \( bc \) have two letters, and that the
\( \nbyn{3} \) terms each have three letters.
We may even spot that in those terms
there is a letter from each row and column of the matrix, e.g.,
in the \( cdh \) term one letter comes from each row and from
each column.
\begin{equation*}
  \begin{mat}
      &   &c  \\
    d       \\
      &h
  \end{mat}
\end{equation*}
But these observations are perhaps more puzzling than
enlightening.
For instance, we might wonder why
some terms are added but some are subtracted.
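To see one of these formulas in action, here is the \( \nbyn{3} \)
formula applied to a specific matrix; observe that each of the six
products takes one entry from each row and from each column.
\begin{equation*}
  \det(
  \begin{mat}[r]
    2 &1 &3 \\
    1 &0 &1 \\
    4 &2 &5
  \end{mat}
  )
  =2\cdot 0\cdot 5+1\cdot 1\cdot 4+3\cdot 1\cdot 2
    -2\cdot 1\cdot 2-5\cdot 1\cdot 1-4\cdot 0\cdot 3
  =1
\end{equation*}
Because the result is nonzero, this matrix is nonsingular.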
A good strategy for solving problems is to explore which
properties the solution must have, and then search for something
with those properties.
So we shall start by asking what properties we'd
like the determinant formulas to have.

At this point, our
main way to decide whether a matrix is singular or not
is to do Gaussian
reduction and then check whether
the diagonal of the echelon form matrix has any zeroes, that is,
whether the product down the diagonal is~zero.
So we could guess that whatever determinant formula we find, the proof that
it is right may involve applying Gauss's Method to the matrix
to show that in the end the product down the diagonal is zero if and only if
our formula gives zero.

This suggests a plan:~we will look for a family of determinant
formulas that are unaffected by row operations
and such that the determinant of an
echelon form matrix is the product of its diagonal entries.
% Under this plan, a proof that the functions determine singularity
% would go, ``Where $T\rightarrow\cdots\rightarrow\hat{T}$ is the Gaussian
% reduction, the determinant of $T$ equals the
% determinant of $\hat{T}$ (because the determinant is unchanged by row
% operations), which is the product down the diagonal, which is
% zero if and only if the matrix is singular''.
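One clause of the plan is easy to check right away: on an echelon form
\( \nbyn{2} \) matrix the formula gives
\begin{equation*}
  \det(
  \begin{mat}
    a &b \\
    0 &d
  \end{mat}
  )
  =ad-b\cdot 0
  =ad
\end{equation*}
which is the product down the diagonal.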
In the rest of this subsection we will test this plan against the
$\nbyn{2}$ and $\nbyn{3}$ formulas.
In the end we will have to modify the ``unaffected by row operations''
part, but not by much.

First we check whether
the $\nbyn{2}$ and $\nbyn{3}$ formulas are unaffected by the row
operation of combining:~if
\begin{equation*}
  T \grstep{k\rho_i+\rho_j} \hat{T}
\end{equation*}
then is \( \det(\hat{T})=\det(T) \)?
This check of the $\nbyn{2}$ determinant after the $k\rho_1+\rho_2$
operation
\begin{equation*}
  \det(
  \begin{mat}
    a    &b    \\
    ka+c &kb+d \\
  \end{mat}
  )
  =
  a(kb+d)-(ka+c)b
  =ad-bc
\end{equation*}
shows that it is indeed unchanged, and the other
$\nbyn{2}$ combination $k\rho_2+\rho_1$ gives the same result.
Likewise, the $\nbyn{3}$ combination $k\rho_3+\rho_2$ leaves
the determinant unchanged
\begin{align*}
  \det(
  \begin{mat}
    a    &b    &c    \\
    kg+d &kh+e &ki+f \\
    g    &h    &i
  \end{mat}
  )
  &=\begin{array}[t]{@{}l@{}}
    a(kh+e)i+b(ki+f)g+c(kg+d)h \\
    \ \hbox{}-h(ki+f)a-i(kg+d)b-g(kh+e)c
  \end{array} \\
  &=aei + bfg + cdh - hfa - idb - gec
\end{align*}
as do the other $\nbyn{3}$ row combination operations.

So there seems to be promise in the plan.
Of course, perhaps if we had worked out the $\nbyn{4}$ determinant formula
and tested it
then we might have found that it is affected by row combinations.
This is an exploration and we do not yet have all the facts.
Nonetheless, so far, so good.

Next we compare \( \det(\hat{T}) \) with
\( \det(T) \) for row swaps.
% \begin{equation*}
%   T \grstep{ {\rho}_i \leftrightarrow {\rho}_j } \hat{T}
% \end{equation*}
Here we hit a snag:
the \( \nbyn{2} \) row swap $\rho_1\leftrightarrow\rho_2$
does not yield \( ad-bc \).
\begin{equation*}
  \det(
  \begin{mat}
    c &d \\
    a &b
  \end{mat}
  )
  = bc - ad
\end{equation*}
And this $\rho_1\leftrightarrow\rho_3$ swap inside of a \( \nbyn{3} \) matrix
\begin{equation*}
  \det(
  \begin{mat}
    g &h &i \\
    d &e &f \\
    a &b &c
  \end{mat}
  )
  = gec + hfa + idb - bfg - cdh - aei
\end{equation*}
also does not give the same determinant as before the swap since again
there is a sign change.
Trying a different \( \nbyn{3} \) swap $\rho_1\leftrightarrow\rho_2$
\begin{equation*}
  \det(
  \begin{mat}
    d &e &f \\
    a &b &c \\
    g &h &i
  \end{mat}
  )
  = dbi + ecg + fah - hcd - iae - gbf
\end{equation*}
also gives a change of sign.

So row swaps appear in this experiment to change the sign of a
determinant.
This does not wreck our plan entirely.
We hope to decide nonsingularity by considering
only whether the formula gives zero, not by considering its sign.
Therefore, instead of expecting determinant formulas to be
entirely unaffected by row operations we modify our plan so that
on a swap they will change sign.

To finish, we compare \( \det(\hat{T}) \) with
\( \det(T) \) for the operation
% \begin{equation*}
%   T \grstep{ k{\rho}_i } \hat{T}
% \end{equation*}
of multiplying a row by a scalar.
This
\begin{equation*}
  \det(
  \begin{mat}
    a  &b  \\
    kc &kd
  \end{mat}
  )
  =
  a(kd) - (kc)b
  =k\cdot (ad-bc)
\end{equation*}
ends with the entire determinant multiplied by~$k$,
and the other $\nbyn{2}$ case has the same result.
This \( \nbyn{3} \) case ends the same way
\begin{align*}
  \det(
  \begin{mat}
    a  &b  &c  \\
    d  &e  &f  \\
    kg &kh &ki
  \end{mat}
  )
  &=
  \begin{array}[t]{@{}l@{}}
    ae(ki) + bf(kg) + cd(kh) \\
    \>- (kh)fa - (ki)db - (kg)ec
  \end{array} \\
  &= k\cdot(aei + bfg + cdh - hfa - idb - gec)
\end{align*}
as do the other two $\nbyn{3}$ cases.
These make us suspect that multiplying a row by~$k$
multiplies the determinant by~$k$.
As before, this modifies our plan but does not wreck it.
We are asking only that the
zero-ness of the determinant formula be unchanged, not focusing on
its sign or magnitude.
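For instance, a single \( \nbyn{2} \) example shows both of these
modified behaviors.
\begin{equation*}
  \det(
  \begin{mat}[r]
    2 &1 \\
    1 &3
  \end{mat}
  )=5
  \qquad
  \det(
  \begin{mat}[r]
    1 &3 \\
    2 &1
  \end{mat}
  )=-5
  \qquad
  \det(
  \begin{mat}[r]
    8 &4 \\
    1 &3
  \end{mat}
  )=20
\end{equation*}
The row swap \( \rho_1\leftrightarrow\rho_2 \) changes the sign while
the rescaling \( 4\rho_1 \) multiplies the determinant by~\( 4 \);
in each case whether the determinant is zero is unaffected.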
So in this exploration our plan
got modified in some inessential ways and is now:~we will look
for $\nbyn{n}$ determinant
functions that remain unchanged
under the operation of row combination, that change sign on
a row swap, that rescale on the rescaling of a row,
and such that the determinant of an echelon form matrix is the product
down the diagonal.
In the next two subsections we will see that for each~$n$
there is one and only one such function.

Finally, for the next subsection note that factoring out scalars
is a row-wise operation:
here
\begin{equation*}
  \det(
  \begin{mat}[r]
    3 &3  &9  \\
    2 &1  &1  \\
    5 &11 &-5
  \end{mat}
  )
  =3 \cdot \det(
  \begin{mat}[r]
    1 &1  &3  \\
    2 &1  &1  \\
    5 &11 &-5
  \end{mat}
  )
\end{equation*}
the $3$ comes out of the top row only, leaving the other rows unchanged.
Consequently in the definition of determinant we will
write it as a function of the rows
\( \det (\vec{\rho}_1,\vec{\rho}_2,\dots,\vec{\rho}_n) \), rather than as
\( \det(T) \) or as a function of the entries
\( \det(t_{1,1},\dots,t_{n,n}) \).

\begin{exercises}
  \recommended
  \item Evaluate the determinant of each.
    \begin{exparts*}
      \partsitem \( \begin{mat}[r]
           3 &1 \\
          -1 &1
        \end{mat} \)
      \partsitem \( \begin{mat}[r]
           2 &0 &1 \\
           3 &1 &1 \\
          -1 &0 &1
        \end{mat} \)
      \partsitem \( \begin{mat}[r]
          4 &0 &1  \\
          0 &0 &1  \\
          1 &3 &-1
        \end{mat} \)
    \end{exparts*}
    \begin{answer}
      \begin{exparts*}
        \partsitem \( 4 \)
        \partsitem \( 3 \)
        \partsitem \( -12 \)
      \end{exparts*}
    \end{answer}
  \item Evaluate the determinant of each.
    \begin{exparts*}
      \partsitem \( \begin{mat}[r]
           2 &0 \\
          -1 &3
        \end{mat} \)
      \partsitem \( \begin{mat}[r]
          2 &1  &1  \\
          0 &5  &-2 \\
          1 &-3 &4
        \end{mat} \)
      \partsitem \( \begin{mat}[r]
          2 &3 &4 \\
          5 &6 &7 \\
          8 &9 &1
        \end{mat} \)
    \end{exparts*}
    \begin{answer}
      \begin{exparts*}
        \partsitem \( 6 \)
        \partsitem \( 21 \)
        \partsitem \( 27 \)
      \end{exparts*}
    \end{answer}
  \recommended
  \item Verify that the determinant of an upper-triangular $\nbyn{3}$
    matrix is the product down the diagonal.
    \begin{equation*}
      \det(
      \begin{mat}
        a &b &c \\
        0 &e &f \\
        0 &0 &i
      \end{mat}
      )
      =aei
    \end{equation*}
    Do lower-triangular matrices work the same way?
    \begin{answer}
      For the first, apply the formula in this section, note that any
      term with a \( d \), \( g \), or \( h \) is zero, and simplify.
      Lower-triangular matrices work the same way.
    \end{answer}
  \recommended
  \item Use the determinant to decide if each is singular or nonsingular.
    \begin{exparts*}
      \partsitem \( \begin{mat}[r]
          2 &1 \\
          3 &1
        \end{mat} \)
      \partsitem \( \begin{mat}[r]
          0 &1  \\
          1 &-1
        \end{mat} \)
      \partsitem \( \begin{mat}[r]
          4 &2 \\
          2 &1
        \end{mat} \)
    \end{exparts*}
    \begin{answer}
      \begin{exparts}
        \partsitem Nonsingular, the determinant is \( -1 \).
        \partsitem Nonsingular, the determinant is \( -1 \).
        \partsitem Singular, the determinant is \( 0 \).
      \end{exparts}
    \end{answer}
  \item Singular or nonsingular?
    Use the determinant to decide.
    \begin{exparts*}
      \partsitem \( \begin{mat}[r]
          2 &1 &1 \\
          3 &2 &2 \\
          0 &1 &4
        \end{mat} \)
      \partsitem \( \begin{mat}[r]
          1 &0 &1 \\
          2 &1 &1 \\
          4 &1 &3
        \end{mat} \)
      \partsitem \( \begin{mat}[r]
          2 &1  &0 \\
          3 &-2 &0 \\
          1 &0  &0
        \end{mat} \)
    \end{exparts*}
    \begin{answer}
      \begin{exparts}
        \partsitem Nonsingular, the determinant is \( 3 \).
        \partsitem Singular, the determinant is \( 0 \).
        \partsitem Singular, the determinant is \( 0 \).
      \end{exparts}
    \end{answer}
  \recommended
  \item Each pair of matrices differ by one row operation.
    Use this operation to compare \( \det(A) \) with \( \det(B) \).
    \begin{exparts}
      \partsitem \( A=\begin{mat}[r]
          1 &2 \\
          2 &3
        \end{mat} \),
        \( B=\begin{mat}[r]
          1 &2  \\
          0 &-1
        \end{mat} \)
      \partsitem \( A=\begin{mat}[r]
          3 &1 &0 \\
          0 &0 &1 \\
          0 &1 &2
        \end{mat} \),
        \( B=\begin{mat}[r]
          3 &1 &0 \\
          0 &1 &2 \\
          0 &0 &1
        \end{mat} \)
      \partsitem \( A=\begin{mat}[r]
          1 &-1 &3  \\
          2 &2  &-6 \\
          1 &0  &4
        \end{mat} \),
        \( B=\begin{mat}[r]
          1 &-1 &3  \\
          1 &1  &-3 \\
          1 &0  &4
        \end{mat} \)
    \end{exparts}
    \begin{answer}
      \begin{exparts}
        \partsitem \( \det(B)=\det(A) \) via \( -2\rho_1+\rho_2 \)
        \partsitem \( \det(B)=-\det(A) \) via \( \rho_2\leftrightarrow\rho_3 \)
        \partsitem \( \det(B)=(1/2)\cdot \det(A) \) via \( (1/2)\rho_2 \)
      \end{exparts}
    \end{answer}
  \recommended
  \item Find the determinant of this $\nbyn{4}$ matrix
    by following the plan: perform Gauss's Method and look for the
    determinant to remain unchanged on a row combination, to change sign
    on a row swap, to rescale on the rescaling of a row, and such
    that the determinant of the echelon form matrix is the product down
    its diagonal.
    \begin{equation*}
      \begin{mat}
        1 &2  &0  &2 \\
        2 &4  &1  &0 \\
        0 &0  &-1 &3 \\
        3 &-1 &1  &4
      \end{mat}
    \end{equation*}
    \begin{answer}
      Gauss's Method does this.
      % sage: m = matrix(QQ, [[1,2,0,2], [2,4,1,0], [0,0,-1,3], [3,-1,1,4]])
      % sage: gauss_method(m)
      % [ 1  2  0  2]
      % [ 2  4  1  0]
      % [ 0  0 -1  3]
      % [ 3 -1  1  4]
      %  take -2 times row 1 plus row 2
      %  take -3 times row 1 plus row 4
      % [ 1  2  0  2]
      % [ 0  0  1 -4]
      % [ 0  0 -1  3]
      % [ 0 -7  1 -2]
      %  swap row 2 with row 4
      % [ 1  2  0  2]
      % [ 0 -7  1 -2]
      % [ 0  0 -1  3]
      % [ 0  0  1 -4]
      %  take 1 times row 3 plus row 4
      % [ 1  2  0  2]
      % [ 0 -7  1 -2]
      % [ 0  0 -1  3]
      % [ 0  0  0 -1]
      \begin{equation*}
        \begin{mat}
          1 &2  &0  &2 \\
          2 &4  &1  &0 \\
          0 &0  &-1 &3 \\
          3 &-1 &1  &4
        \end{mat}
        \grstep[-3\rho_1+\rho_4]{-2\rho_1+\rho_2}
        \grstep{\rho_2\leftrightarrow\rho_4}
        \grstep{\rho_3+\rho_4}
        \begin{mat}
          1 &2  &0  &2  \\
          0 &-7 &1  &-2 \\
          0 &0  &-1 &3  \\
          0 &0  &0  &-1
        \end{mat}
      \end{equation*}
      The echelon form matrix has a product down the diagonal of
      $1\cdot(-7)\cdot(-1)\cdot(-1)=-7$.
      In the course of Gauss's Method no rows got rescaled but there
      was a row swap, so to get the determinant we change the sign,
      giving $+7$.
    \end{answer}
  \item Show this.
    \begin{equation*}
      \det(
      \begin{mat}
        1   &1   &1   \\
        a   &b   &c   \\
        a^2 &b^2 &c^2
      \end{mat}
      )
      =(b-a)(c-a)(c-b)
    \end{equation*}
    \begin{answer}
      Using the formula for the determinant of a $\nbyn{3}$ matrix
      we expand the left side
      \begin{equation*}
        1\cdot b\cdot c^2+1\cdot c\cdot a^2+1\cdot a\cdot b^2
          -b^2\cdot c\cdot 1
          -c^2\cdot a\cdot 1-a^2\cdot b\cdot 1
      \end{equation*}
      and by distributing we expand the right side.
      \begin{equation*}
        (bc-ba-ac+a^2)\cdot(c-b)
        =c^2b-b^2c-bac+b^2a-ac^2+acb+a^2c-a^2b
      \end{equation*}
      Now we can just check that the two are equal.
      (\textit{Remark}.
      This is the \( \nbyn{3} \) case of
      \definend{Vandermonde's determinant}\index{Vandermonde!determinant}%
      \index{determinant!Vandermonde}
      which arises in applications.)
    \end{answer}
  \recommended
  \item Which real numbers \( x \) make this matrix singular?
    \begin{equation*}
      \begin{mat}
        12-x &4   \\
        8    &8-x
      \end{mat}
    \end{equation*}
    \begin{answer}
      This equation
      \begin{equation*}
        0=
        \det(
        \begin{mat}
          12-x &4   \\
          8    &8-x
        \end{mat}
        )
        =64-20x+x^2
        =(x-16)(x-4)
      \end{equation*}
      has roots \( x=16 \) and \( x=4 \).
    \end{answer}
  \item \label{exer:ThreeByThreeDetForm}
    Do the Gaussian reduction to check the formula for $\nbyn{3}$ matrices
    stated in the preamble to this section.
    \begin{center}
      \( \begin{mat}
        a &b &c \\
        d &e &f \\
        g &h &i
      \end{mat} \)
      is nonsingular iff
      \( aei+bfg+cdh-hfa-idb-gec \neq 0 \)
    \end{center}
    \begin{answer}
      We first reduce the matrix to echelon form.
      To begin, assume that \( a\neq 0 \) and that \( ae-bd\neq 0 \).
      \begin{multline*}
        \grstep{(1/a)\rho_1}
        \begin{mat}
          1 &b/a &c/a \\
          d &e   &f   \\
          g &h   &i
        \end{mat}
        \grstep[-g\rho_1+\rho_3]{-d\rho_1+\rho_2}
        \begin{mat}
          1 &b/a       &c/a       \\
          0 &(ae-bd)/a &(af-cd)/a \\
          0 &(ah-bg)/a &(ai-cg)/a
        \end{mat} \\
        \grstep{(a/(ae-bd))\rho_2}
        \begin{mat}
          1 &b/a       &c/a              \\
          0 &1         &(af-cd)/(ae-bd) \\
          0 &(ah-bg)/a &(ai-cg)/a
        \end{mat}
      \end{multline*}
      This step finishes the calculation.
      \begin{equation*}
        \grstep{(-(ah-bg)/a)\rho_2+\rho_3}
        \begin{mat}
          1 &b/a &c/a              \\
          0 &1   &(af-cd)/(ae-bd) \\
          0 &0   &(aei+bfg+cdh-hfa-idb-gec)/(ae-bd)
        \end{mat}
      \end{equation*}
      Now assuming that $a\neq 0$ and \( ae-bd\neq 0 \), the original matrix
      is nonsingular if and only if
      the \( 3,3 \) entry above is nonzero.
      That is, under the assumptions, the original matrix is
      nonsingular if and only if
      $aei+bfg+cdh-hfa-idb-gec\neq 0$, as required.

      We finish by running down what happens if the assumptions that were
      taken for convenience in the prior paragraph do not hold.
      First, if \( a\neq 0 \) but \( ae-bd=0 \) then we can swap
      \begin{equation*}
        \begin{mat}
          1 &b/a       &c/a       \\
          0 &0         &(af-cd)/a \\
          0 &(ah-bg)/a &(ai-cg)/a
        \end{mat}
        \grstep{\rho_2\leftrightarrow\rho_3}
        \begin{mat}
          1 &b/a       &c/a       \\
          0 &(ah-bg)/a &(ai-cg)/a \\
          0 &0         &(af-cd)/a
        \end{mat}
      \end{equation*}
      and conclude that the matrix is singular if and only if
      either \( ah-bg=0 \) or \( af-cd=0 \).
      The condition `\( ah-bg=0 \) or \( af-cd=0 \)' is equivalent to the
      condition `\( (ah-bg)(af-cd)=0 \)'.
      Multiplying out and using the case assumption that $ae-bd=0$
      to substitute $ae$ for $bd$ gives this.
      \begin{multline*}
        0=ahaf-ahcd-bgaf+bgcd
        =ahaf-ahcd-bgaf+aegc \\
        =a(haf-hcd-bgf+egc)
      \end{multline*}
      Since \( a\neq 0 \), we have that the matrix is singular if and
      only if \( haf-hcd-bgf+egc=0 \).
      Therefore, in this \( a\neq 0 \) and \( ae-bd=0 \) case, the matrix is
      singular exactly when \( haf-hcd-bgf+egc-i(ae-bd)=0 \); because this
      expression is the negative of \( aei+bfg+cdh-hfa-idb-gec \), that is
      exactly when the determinant formula gives zero.
      The remaining cases are routine.
  Do the $$a=0$$ but $$d\neq 0$$ case and the $$a=0$$ and $$d=0$$
  but $$g\neq 0$$ case by first swapping rows and then going on
  as above.
  The $$a=0$$, $$d=0$$, and $$g=0$$ case is easy\Dash that matrix is
  singular since the columns form a linearly dependent set, and the
  determinant comes out to be zero.
\end{answer}
\item Show that the equation of a line in $$\Re^2$$ thru $$(x_1,y_1)$$
  and $$(x_2,y_2)$$ is given by this determinant.
  \begin{equation*}
    \det(
    \begin{mat}
      x    &y    &1  \\
      x_1  &y_1  &1  \\
      x_2  &y_2  &1
    \end{mat})=0   \qquad x_1\neq x_2
  \end{equation*}
  \begin{answer}
    Figuring the determinant and doing some algebra gives this.
    \begin{align*}
      0 &=y_1x+x_2y+x_1y_2-y_2x-x_1y-x_2y_1                  \\
      (x_2-x_1)\cdot y &=(y_2-y_1)\cdot x+x_2y_1-x_1y_2      \\
      y &=\frac{y_2-y_1}{x_2-x_1}\cdot x+\frac{x_2y_1-x_1y_2}{x_2-x_1}
    \end{align*}
    Note that this is the equation of a line (in particular, it
    contains the familiar expression for the slope), and note that
    $$(x_1,y_1)$$ and $$(x_2,y_2)$$ satisfy it.
  \end{answer}
\item
  Many people have learned, perhaps in Calculus, this mnemonic for
  the determinant of a $$\nbyn{3}$$ matrix:
  first repeat the first two columns,
  then sum the products on the forward diagonals,
  and then subtract the products on the backward diagonals.
  That is, first write
  \begin{equation*}
    \begin{pmat}{ccc|cc}
      h_{1,1}  &h_{1,2}  &h_{1,3}  &h_{1,1}  &h_{1,2}  \\
      h_{2,1}  &h_{2,2}  &h_{2,3}  &h_{2,1}  &h_{2,2}  \\
      h_{3,1}  &h_{3,2}  &h_{3,3}  &h_{3,1}  &h_{3,2}
    \end{pmat}
  \end{equation*}
  and then calculate this.
  \begin{equation*}
    \begin{array}{l}
      h_{1,1}h_{2,2}h_{3,3}+h_{1,2}h_{2,3}h_{3,1}+h_{1,3}h_{2,1}h_{3,2} \\
      \>-h_{3,1}h_{2,2}h_{1,3}-h_{3,2}h_{2,3}h_{1,1}
         -h_{3,3}h_{2,1}h_{1,2}
    \end{array}
  \end{equation*}
  \begin{exparts}
    \partsitem Check that this agrees with the formula given in the
      preamble to this section.
    \partsitem Does it extend to other-sized determinants?
  \end{exparts}
  \begin{answer}
    \begin{exparts}
      \partsitem The comparison with the formula given in the
        preamble to this section is easy.
      \partsitem A version of it holds for $$\nbyn{2}$$ matrices.
        Here no columns need repeating\Dash the product on the
        forward diagonal minus the product on the backward diagonal
        \begin{equation*}
          h_{1,1}h_{2,2}-h_{2,1}h_{1,2}
        \end{equation*}
        is the determinant.
        (Repeating the first column as in the $$\nbyn{3}$$ scheme
        would give two forward and two backward products,
        $h_{1,1}h_{2,2}+h_{1,2}h_{2,1}-h_{2,1}h_{1,2}-h_{2,2}h_{1,1}$,
        which cancel to zero.)
        However, the scheme does not hold for $$\nbyn{4}$$ matrices.
        An example is that this matrix is singular because the
        second and third rows are equal
        \begin{equation*}
          \begin{mat}[r]
            1  &0  &0  &1  \\
            0  &1  &1  &0  \\
            0  &1  &1  &0  \\
            -1 &0  &0  &1
          \end{mat}
        \end{equation*}
        but following the scheme of the mnemonic does not give zero.
        \begin{equation*}
          \begin{pmat}{rrrr|rrr}
            1  &0  &0  &1  &1  &0  &0  \\
            0  &1  &1  &0  &0  &1  &1  \\
            0  &1  &1  &0  &0  &1  &1  \\
            -1 &0  &0  &1  &-1 &0  &0
          \end{pmat}
          =\begin{array}[t]{@{}l@{}}
            1+0+0+0          \\
            \>-(-1)-0-0-0
          \end{array}
        \end{equation*}
    \end{exparts}
  \end{answer}
\item The \definend{cross product}\index{cross product}%
  \index{vector!cross product}
  of the vectors
  \begin{equation*}
    \vec{x}=\colvec{x_1 \\ x_2 \\ x_3}
    \qquad
    \vec{y}=\colvec{y_1 \\ y_2 \\ y_3}
  \end{equation*}
  is the vector computed as this determinant.
  \begin{equation*}
    \vec{x}\times\vec{y}=
    \det(\begin{mat}
      \vec{e}_1  &\vec{e}_2  &\vec{e}_3  \\
      x_1        &x_2        &x_3        \\
      y_1        &y_2        &y_3
    \end{mat})
  \end{equation*}
  Note that the first row's entries are vectors, the vectors from the
  standard basis for $\Re^3$.
  Show that the cross product of two vectors is perpendicular to each
  vector.
  \begin{answer}
    The determinant is
    $(x_2y_3-x_3y_2)\vec{e}_1
    +(x_3y_1-x_1y_3)\vec{e}_2
    +(x_1y_2-x_2y_1)\vec{e}_3$.
    To check perpendicularity, we check that the dot product with the
    first vector is zero
    \begin{equation*}
      \colvec{x_1 \\ x_2 \\ x_3}
      \dotprod
      \colvec{x_2y_3-x_3y_2 \\ x_3y_1-x_1y_3 \\ x_1y_2-x_2y_1}
      =x_1x_2y_3-x_1x_3y_2+x_2x_3y_1-x_1x_2y_3+x_1x_3y_2-x_2x_3y_1=0
    \end{equation*}
    and the dot product with the second vector is also zero.
    \begin{equation*}
      \colvec{y_1 \\ y_2 \\ y_3}
      \dotprod
      \colvec{x_2y_3-x_3y_2 \\ x_3y_1-x_1y_3 \\ x_1y_2-x_2y_1}
      =x_2y_1y_3-x_3y_1y_2+x_3y_1y_2-x_1y_2y_3+x_1y_2y_3-x_2y_1y_3=0
    \end{equation*}
  \end{answer}
\item Prove that each statement holds for $\nbyn{2}$ matrices.
  \begin{exparts}
    \partsitem The determinant of a product is the product of the
      determinants $\det(ST)=\det(S)\cdot\det(T)$.
    \partsitem If $$T$$ is invertible then the determinant of the
      inverse is the inverse of the determinant
      $$\det(T^{-1})=(\,\det(T)\,)^{-1}$$.
  \end{exparts}
  Matrices $T$ and $T^\prime$ are
  \definend{similar}\index{similar} if there is a nonsingular matrix
  $P$ such that $T^\prime=PTP^{-1}$.
  (We shall look at this relationship in Chapter Five.)
  Show that similar $$\nbyn{2}$$ matrices have the same determinant.
  \begin{answer}
    \begin{exparts}
      \partsitem Plug and chug: the determinant of the product is this
        \begin{align*}
          \det(\begin{mat}
            a  &b  \\
            c  &d
          \end{mat}
          \begin{mat}
            w  &x  \\
            y  &z
          \end{mat} )
          &=
          \det(\begin{mat}
            aw+by  &ax+bz  \\
            cw+dy  &cx+dz
          \end{mat} )                  \\
          &=
          \begin{array}[t]{@{}l@{}}
            acwx+adwz+bcxy+bdyz        \\
            \> -acwx-bcwz-adxy-bdyz
          \end{array}
        \end{align*}
        while the product of the determinants is this.
        \begin{equation*}
          \det(\begin{mat}
            a  &b  \\
            c  &d
          \end{mat})
          \cdot\det(\begin{mat}
            w  &x  \\
            y  &z
          \end{mat})
          =
          (ad-bc)\cdot (wz-xy)
        \end{equation*}
        Verification that they are equal is easy.
      \partsitem Use the prior item.
    \end{exparts}

    \noindent That similar matrices have the same determinant is
    immediate from the above two:
    $\det(PTP^{-1})=\det(P)\cdot\det(T)\cdot\det(P^{-1})
    =\det(P)\cdot\det(T)\cdot\det(P)^{-1}=\det(T)$.
  \end{answer}
\recommended
\item Prove that the area of this region in the plane
  \begin{center}
    \includegraphics{ch4.30}
  \end{center}
  is equal to the value of this determinant.
  \begin{equation*}
    \det(
    \begin{mat}
      x_1  &x_2  \\
      y_1  &y_2
    \end{mat})
  \end{equation*}
  Compare with this.
  \begin{equation*}
    \det(
    \begin{mat}
      x_2  &x_1  \\
      y_2  &y_1
    \end{mat})
  \end{equation*}
  \begin{answer}
    One way is to count these areas
    \begin{center}
      \includegraphics{ch4.31}
    \end{center}
    by taking the area of the entire rectangle and subtracting the
    area of $A$ the upper-left rectangle, $B$ the upper-middle
    triangle, $D$ the upper-right triangle, $C$ the lower-left
    triangle, $E$ the lower-middle triangle, and $F$ the lower-right
    rectangle
    $$(x_1+x_2)(y_1+y_2)-x_2y_1-(1/2)x_1y_1-(1/2)x_2y_2
    -(1/2)x_2y_2-(1/2)x_1y_1-x_2y_1$$.
    Simplification gives the determinant formula.

    This determinant is the negative of the one above; the formula
    distinguishes whether the second column is counterclockwise from
    the first.
  \end{answer}
\item Prove that for $$\nbyn{2}$$ matrices, the determinant of a
  matrix equals the determinant of its transpose.
  Does that also hold for $$\nbyn{3}$$ matrices?
  \begin{answer}
    The computation for $$\nbyn{2}$$ matrices, using the formula
    quoted in the preamble, is easy.
    It does also hold for $$\nbyn{3}$$ matrices; the computation is
    routine.
  \end{answer}
\item
  Is the determinant function linear \Dash is
  $$\det(x\cdot T+y\cdot S)=x\cdot \det(T)+y\cdot \det(S)$$?
  \begin{answer}
    No.
    We illustrate with the $\nbyn{2}$ determinant.
    Recall that constants come out one row at a time.
    \begin{equation*}
      \det(
      \begin{mat}[r]
        2  &4  \\
        2  &6  \\
      \end{mat})
      =
      2\cdot\det(\begin{mat}[r]
        1  &2  \\
        2  &6  \\
      \end{mat})
      =
      2\cdot 2\cdot \det(\begin{mat}[r]
        1  &2  \\
        1  &3  \\
      \end{mat})
    \end{equation*}
    So $$\det(2\cdot T)$$ is $$4\cdot\det(T)$$ rather than
    $$2\cdot\det(T)$$, contradicting linearity
    (here we didn't need $$S$$, i.e., we can take $S$ to be the
    matrix of zeros).
  \end{answer}
\item Show that if $$A$$ is $$\nbyn{3}$$ then
  $$\det(c\cdot A)=c^3\cdot \det(A)$$ for any scalar $$c$$.
  \begin{answer}
    Bring out the $$c$$'s one row at a time.
  \end{answer}
\item Which real numbers $$\theta$$ make
  \begin{equation*}
    \begin{mat}
      \cos\theta  &-\sin\theta  \\
      \sin\theta  &\cos\theta
    \end{mat}
  \end{equation*}
  singular?
  Explain geometrically.
  \begin{answer}
    There are no real numbers $$\theta$$ that make the matrix
    singular because the determinant of the matrix,
    $$\cos^2\theta+\sin^2\theta$$, is never $0$\Dash it equals $1$
    for all $\theta$.
    Geometrically, with respect to the standard basis, this matrix
    represents a rotation of the plane through an angle of
    $$\theta$$.
    Each such map is one-to-one \Dash for one thing, it is
    invertible.
  \end{answer}
\puzzle
\item
  \cite{Monthly55p257}
  If a third order determinant has elements $$1$$, $$2$$, \ldots,
  $$9$$, what is the maximum value it may have?
  \begin{answer}
    \answerasgiven
    Let $$P$$ be the sum of the three positive terms of the
    determinant and $$-N$$ the sum of the three negative terms.
    The maximum value of $$P$$ is
    \begin{equation*}
      9\cdot 8\cdot 7
        +6\cdot 5\cdot 4
        +3\cdot 2\cdot 1=630.
    \end{equation*}
    The minimum value of $$N$$ consistent with $$P$$ is
    \begin{equation*}
      9\cdot 6\cdot 1
        +8\cdot 5\cdot 2
        +7\cdot 4\cdot 3=218.
    \end{equation*}
    Any change in $$P$$ would result in lowering that sum by more
    than $$4$$.
    Therefore $$412$$ is the maximum value for the determinant and
    one form for the determinant is
    \begin{equation*}
      \begin{vmat}[r]
        9  &4  &2  \\
        3  &8  &6  \\
        5  &1  &7
      \end{vmat}.
    \end{equation*}
  \end{answer}
\end{exercises}

\subsection{Properties of Determinants}
\index{determinant|(}

We want a formula
to determine whether an $\nbyn{n}$ matrix is nonsingular.
We will not begin by stating such a formula.
Instead we will begin by considering, for each~$n$,
the function that such a formula calculates.
We will define this function by a list of properties.
We will then prove that a function with these properties exists and
is unique, and also describe how to compute it.
(Because we will eventually prove this, from the start we will just
say `$$\det(T)$$' instead of
`if there is a unique determinant function then $$\det(T)$$'.)
\begin{definition} \label{def:Det}
%<*df:Det>
A $$\nbyn{n}$$ \definend{determinant\/}\index{determinant!definition}%
\index{matrix!determinant}
is a function $$\map{\det}{\matspace_{\nbyn{n}}}{\Re}$$ such that
\begin{enumerate}
  \item $\det (\vec{\rho}_1,\dots,k\cdot\vec{\rho}_i
          + \vec{\rho}_j,\dots,\vec{\rho}_n)
          =\det (\vec{\rho}_1,\dots,\vec{\rho}_j,\dots,\vec{\rho}_n)$
    for $$i\ne j$$
  \item $\det (\vec{\rho}_1,\ldots,\vec{\rho}_j,
          \dots,\vec{\rho}_i,\dots,\vec{\rho}_n)
          = -\det (\vec{\rho}_1,\dots,\vec{\rho}_i,\dots,\vec{\rho}_j,
          \dots,\vec{\rho}_n)$
    for $$i\ne j$$
  \item $\det (\vec{\rho}_1,\dots,k\vec{\rho}_i,\dots,\vec{\rho}_n)
          = k\cdot \det (\vec{\rho}_1,\dots,\vec{\rho}_i,\dots,\vec{\rho}_n)$
    for any scalar~$k$
  \item $\det(I)=1$ where $$I$$ is an identity matrix
\end{enumerate}
(the $\vec{\rho}\,$'s are the rows of the matrix).
We often write $$\deter{T}$$ for $$\det (T)$$.
%</df:Det>
\end{definition}

\begin{remark} \label{rem:SwapRowsRedun}
%<*re:SwapRowsRedun>
Condition~(2) is redundant since
\begin{equation*}
  T\grstep{\rho_i+\rho_j}
  \repeatedgrstep{-\rho_j+\rho_i}
  \repeatedgrstep{\rho_i+\rho_j}
  \repeatedgrstep{-\rho_i}
  \hat{T}
\end{equation*}
swaps rows $$i$$ and~$$j$$.
We have listed it for
consistency with the Gauss's Method presentation in earlier chapters.
%</re:SwapRowsRedun>
\end{remark}

\begin{remark}
Condition~(3) does not have a $k\neq 0$ restriction, although the
Gauss's Method operation of multiplying a row by~$k$ does have it.
The next result shows that we do not need that restriction here.
\end{remark}

\begin{lemma} \label{le:IdenRowsDetZero}
%<*lm:IdenRowsDetZero>
A matrix with two identical rows has a determinant of zero.
A matrix with a zero row has a determinant of zero.
A matrix is nonsingular if and only if its determinant is nonzero.
The determinant of an echelon form matrix is the product down its
diagonal.
%</lm:IdenRowsDetZero>
\end{lemma}

\begin{proof}
%<*pf:IdenRowsDetZero0>
To verify the first sentence swap the two equal rows.
The sign of the determinant changes
but the matrix is the same and so its determinant is the same.
Thus the determinant is zero.
%</pf:IdenRowsDetZero0>

%<*pf:IdenRowsDetZero1>
For the second sentence
multiply the zero row by two.
That doubles the determinant but it also
leaves the row unchanged, and hence
leaves the determinant unchanged.
Thus the determinant must be zero.
%</pf:IdenRowsDetZero1>

%<*pf:IdenRowsDetZero2>
Do Gauss-Jordan reduction for the third sentence,
$T \rightarrow\cdots\rightarrow\hat{T}$.
By the first three properties
the determinant of $T$ is zero
if and only if the determinant of $\hat{T}$ is zero
(although the two could differ in sign or magnitude).
A nonsingular matrix $T$ Gauss-Jordan reduces to an identity matrix
and so has a nonzero determinant.
A singular $T$ reduces to a $\hat{T}$ with a zero row;
by the second sentence of this lemma its determinant is zero.
%</pf:IdenRowsDetZero2>

%<*pf:IdenRowsDetZero3>
The fourth sentence has two cases.
If the echelon form matrix is singular then it has a zero row.
Thus it has a zero on its diagonal and
the product down its diagonal is zero.
By the third sentence of this result the determinant is zero and
therefore this matrix's determinant equals the
product down its diagonal.
%</pf:IdenRowsDetZero3>

%<*pf:IdenRowsDetZero4>
If the echelon form matrix is nonsingular then none of its diagonal
entries is zero.
This means that we can divide by those entries and use
condition~(3) to get $1$'s on the diagonal.
\begin{equation*}
  \begin{vmat}
    t_{1,1}  &t_{1,2}  &        &t_{1,n}  \\
    0        &t_{2,2}  &        &t_{2,n}  \\
             &         &\ddots            \\
    0        &         &        &t_{n,n}
  \end{vmat}
  =
  t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot
  \begin{vmat}
    1  &t_{1,2}/t_{1,1}  &        &t_{1,n}/t_{1,1}  \\
    0  &1                &        &t_{2,n}/t_{2,2}  \\
       &                 &\ddots                     \\
    0  &                 &        &1
  \end{vmat}
\end{equation*}
Then the Jordan half of Gauss-Jordan elimination leaves the identity
matrix.
\begin{equation*}
  =
  t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot
  \begin{vmat}
    1  &0  &        &0  \\
    0  &1  &        &0  \\
       &   &\ddots      \\
    0  &   &        &1
  \end{vmat}
  =
  t_{1,1}\cdot t_{2,2}\cdots t_{n,n}\cdot 1
\end{equation*}
So in this case also, the determinant is the product down the
diagonal.
%</pf:IdenRowsDetZero4>
\end{proof}

That gives us a way to compute the value of a determinant
function on a matrix:
do Gaussian reduction, keeping track of any changes of
sign caused by row swaps and any scalars that we factor out,
and finish by multiplying
down the diagonal of the echelon form result.
This algorithm is as fast as Gauss's Method and so is
practical on all of the matrices that we will see.
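The procedure just described is easy to mechanize.
This short routine (an illustration only, not part of the text) uses
exact rational arithmetic so that no precision is lost in the
row-scaling steps:

```python
from fractions import Fraction

def det(rows):
    """Determinant by Gauss's Method: reduce to echelon form, tracking
    sign changes from row swaps, then multiply down the diagonal."""
    m = [[Fraction(x) for x in row] for row in rows]  # exact arithmetic
    n = len(m)
    sign = 1
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero entry here.
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)      # the matrix is singular
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign            # condition (2): a swap flips the sign
        for r in range(col + 1, n):
            # Condition (1): k*rho_i + rho_j leaves the determinant unchanged.
            k = m[r][col] / m[col][col]
            m[r] = [a - k * b for a, b in zip(m[r], m[col])]
    product = Fraction(1)
    for i in range(n):
        product *= m[i][i]          # product down the diagonal
    return sign * product
```

On the examples below, `det([[2, 4], [-1, 3]])` gives `10` and the
$\nbyn{3}$ example gives `-54`, matching the hand computations.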
\begin{example}
Doing $$\nbyn{2}$$ determinants
with Gauss's Method
\begin{equation*}
  \begin{vmat}[r]
    2  &4  \\
    -1 &3
  \end{vmat}
  =
  \begin{vmat}[r]
    2  &4  \\
    0  &5
  \end{vmat}
  =10
\end{equation*}
doesn't give a big time savings
because the $\nbyn{2}$ determinant formula is easy.
However, a $$\nbyn{3}$$ determinant is often easier to calculate
with Gauss's Method than with its formula.
\begin{equation*}
  \begin{vmat}[r]
    2  &2  &6   \\
    4  &4  &3   \\
    0  &-3 &5
  \end{vmat}
  =
  \begin{vmat}[r]
    2  &2  &6   \\
    0  &0  &-9  \\
    0  &-3 &5
  \end{vmat}
  =
  -\begin{vmat}[r]
    2  &2  &6   \\
    0  &-3 &5   \\
    0  &0  &-9
  \end{vmat}
  =-54
\end{equation*}
\end{example}

\begin{example}
Determinants bigger than $\nbyn{3}$ go
quickly with the Gauss's Method procedure.
\begin{equation*}
  \begin{vmat}[r]
    1  &0  &1  &3  \\
    0  &1  &1  &4  \\
    0  &0  &0  &5  \\
    0  &1  &0  &1
  \end{vmat}
  =
  \begin{vmat}[r]
    1  &0  &1  &3  \\
    0  &1  &1  &4  \\
    0  &0  &0  &5  \\
    0  &0  &-1 &-3
  \end{vmat}
  =
  -\begin{vmat}[r]
    1  &0  &1  &3  \\
    0  &1  &1  &4  \\
    0  &0  &-1 &-3 \\
    0  &0  &0  &5
  \end{vmat}
  =-(-5)=5
\end{equation*}
\end{example}

The prior example illustrates an important point.
Although we have not yet found a $\nbyn{4}$ determinant formula, if
one exists then we know what value it gives to the matrix\Dash
if there is a function with properties (1)\,--\,(4) then on the above
matrix the function must return $5$.

\begin{lemma} \label{lm:DetFcnIsUnique}
%<*lm:DetFcnIsUnique>
For each $n$, if there is an $\nbyn{n}$ determinant function then it
is unique.
%</lm:DetFcnIsUnique>
\end{lemma}

\begin{proof}
%<*pf:DetFcnIsUnique>
Perform Gauss's Method on the
matrix, keeping track of how the sign alternates on row swaps and
any row-scaling factors,
and then multiply down the diagonal of the echelon form result.
By the definition and the lemma,
all $\nbyn{n}$ determinant functions must return this value on the
matrix.
%</pf:DetFcnIsUnique>
\end{proof}

The `if there is an $\nbyn{n}$ determinant function'
emphasizes that,
although we can
use Gauss's Method to compute the only value that a determinant
function could possibly return,
we haven't yet shown that such a function exists for all $n$.
The rest of this section does that.

\begin{exercises}
  \item[{\em For these, assume that an $\nbyn{n}$ determinant
    function exists for all $n$.}]
  \recommended
  \item Find each determinant by performing one row operation.
    \begin{exparts*}
      \partsitem $$\begin{vmat}[r]
          1  &-2  &1   &2  \\
          2  &-4  &1   &0  \\
          0  &0   &-1  &0  \\
          0  &0   &0   &5
        \end{vmat}$$
      \partsitem $$\begin{vmat}[r]
          1  &1  &-2  \\
          0  &0  &4   \\
          0  &3  &-6
        \end{vmat}$$
    \end{exparts*}
    \begin{answer}
      \begin{exparts}
        \partsitem Do $-2\rho_1+\rho_2$.
          In the result the second column is $-2$ times the first, so
          the columns are linearly dependent, the matrix is singular,
          and the determinant is~$0$.
          % sage: M = matrix(QQ, [[1,-2,1,2], [2,-4,1,0], [0,0,-1,0], [0,0,0,5]])
          % sage: M.determinant()
          % 0
        \partsitem Swapping the second and third rows brings the
          system to echelon form (and changes the sign of the
          determinant).
          Multiplying down the diagonal gives~$12$, so the
          determinant of the given matrix is~$-12$.
          % sage: M = matrix(QQ, [[1,1,-2], [0,0,4], [0,3,-6]])
          % sage: M.determinant()
          % -12
      \end{exparts}
    \end{answer}