Jim Hefferon / linearalgebra / Commits

Commit 8018b48a, authored Mar 01, 2012 by Jim Hefferon

    did the usage pass over gr fiels

parent 783c7ed1

Changes: 10 changed files, with 14085 additions and 13967 deletions
book.pdf        +13854  -13848
bookans.tex        +41     -15
ch1.mp              +5      -0
ch5.mp             +46      -0
gr1.tex            +55     -53
gr2.tex            +31     -29
gr3.tex            +18     -17
jhanswer.pdf        +0      -0
pref.tex            +5      -5
search.tex         +30      -0
book.pdf
View file @ 8018b48a
No preview for this file type
bookans.tex
View file @ 8018b48a
 \chapter{Chapter One: Linear Systems}
 \subsection{One.I.1: Linear Systems}
 \begin{ans}{One.I.1.17}
-Gauss' method can be performed
+We can perform Gauss' method
 in different ways, so these simply
 exhibit one possible way to get the answer.
 \begin{exparts}
   \partsitem Gauss' method
...
...
@@ -214,7 +214,7 @@
 \end{ans}
 \begin{ans}{One.I.1.24}
   Yes.
-  For example, the fact that
-  the same reaction can be performed
+  For example, the fact that
+  we can have the same reaction
   in two different flasks shows that twice any solution is another,
   different, solution (if a physical reaction occurs then there must be
   at least one nonzero solution).
...
...
@@ -282,7 +282,7 @@
   \partsitem Yes, by inspection the given equation results from
     \( -\rho_1+\rho_2 \).
   \partsitem No.
-    The given equation is satisfied by the pair \( (1,1) \).
+    The pair \( (1,1) \) satisfies the given equation.
     However, that pair
     does not satisfy the first equation in the system.
   \partsitem Yes.
...
...
@@ -468,7 +468,7 @@
 again by the definition of `satisfies'.
 Subtract \( k \) times the \( i \)th equation from the \( j \)th
 equation
-(remark:~here is where \( i\neq j \) is needed; if \( i=j \) then the two
+(remark:~here is where we need \( i\neq j \); if \( i=j \) then the two
 \( d_i \)'s above are not equal) to
 get that the previous compound statement holds if and only if
 \begin{align*}
...
...
@@ -522,7 +522,7 @@
 \end{ans}
 \begin{ans}{One.I.1.34}
-  Swapping rows is reversed by swapping back.
+  Reverse a row swap by swapping back.
   \begin{eqnarray*}
     \begin{linsys}{3}
       a_{1,1}x_1  &+  &\cdots  &+  &a_{1,n}x_n  &=  &d_1  \\
...
...
@@ -1704,8 +1704,8 @@
 \end{ans}
 \begin{ans}{One.I.3.23}
   In this case the solution set is all of \( \Re^n \) and
-  can be expressed in the required form
+  we can express it in the required form
   \begin{equation*}
     \set{c_1\colvec[r]{1 \\ 0 \\ \vdotswithin{1} \\ 0}
          +c_2\colvec[r]{0 \\ 1 \\ \vdotswithin{0} \\ 0}
...
...
@@ -1759,7 +1759,7 @@
   Gauss' method will use only rationals (e.g.,
   \( -(m/n)\rho_i+\rho_j \)).
-  Thus the solution set can be expressed
+  Thus we can express the solution set
   using only rational numbers as
   the components of each vector.
   Now the particular solution is all rational.
...
...
@@ -1858,7 +1858,7 @@
       -\colvec[r]{1 \\ 0 \\ 4}
       =\colvec[r]{3 \\ 0 \\ 7}
   \end{equation*}
-  that plane can be described in this way.
+  we can describe that plane in this way.
   \begin{equation*}
     \set{\colvec[r]{1 \\ 0 \\ 4}
          +m\colvec[r]{1 \\ 1 \\ 2}
...
...
@@ -2086,11 +2086,11 @@
 \end{ans}
 \begin{ans}{One.II.2.14}
-  The set
+  We could describe the set
   \begin{equation*}
     \set{\colvec{x \\ y \\ z}\suchthat 1x+3y-1z=0}
   \end{equation*}
-  can also be described with parameters in this way.
+  with parameters in this way.
   \begin{equation*}
     \set{\colvec[r]{-3 \\ 1 \\ 0}y+\colvec[r]{1 \\ 0 \\ 1}z
          \suchthat y,z\in\Re}
...
...
@@ -2374,7 +2374,7 @@
 \end{ans}
 \begin{ans}{One.II.2.30}
-  The angle between \( (a) \) and \( (b) \) is found
+  We can find the angle between \( (a) \) and \( (b) \)
   (for \( a,b\neq 0 \)) with
   \begin{equation*}
     \arccos(\frac{ab}{\sqrt{a^2}\sqrt{b^2}}).
...
...
@@ -2468,7 +2468,7 @@
   The \( \vec{z}_1+\vec{z}_2=\zero \) case is easy.
   For the rest, by the definition of angle,
-  we will be done if we show this.
+  we will be finished if we show this.
   \begin{equation*}
     \frac{\vec{z}_1\dotprod(\vec{z}_1+\vec{z}_2)}{
       \norm{\vec{z}_1}\,\norm{\vec{z}_1+\vec{z}_2} }
...
...
@@ -3015,7 +3015,7 @@
          +\colvec[r]{1 \\ 1 \\ 0 \\ 1}w
     \suchthat z,w\in\Re}
   \end{equation*}
-  (of course, the zero vector could be omitted from the description).
+  (of course, we could omit the zero vector from the description).
   \partsitem The ``Jordan'' half
     \begin{equation*}
       \grstep{(1/7)\rho_2}
...
...
@@ -3210,7 +3210,7 @@
 For symmetric, we assume $A$ has the same sum of entries as~$B$
 and obviously then $B$ has the same sum of entries as~$A$.
 Transitivity is no harder\Dash if $A$ has the same sum of entries
-as $B$ and $B$ has the same sum of entries as $C$ then
-clearly
+as $B$ and $B$ has the same sum of entries as $C$ then
 $A$ has the same as $C$.
 \end{ans}
...
...
@@ -29481,6 +29481,32 @@ ans = 0.017398
 \end{ans}
 \begin{ans}{2}
+This \textit{Sage} session gives equal values.
+\begin{lstlisting}
+sage: H=matrix(QQ,[[0,0,0,1], [1,0,0,0], [0,1,0,0], [0,0,1,0]])
+sage: S=matrix(QQ,[[1/4,1/4,1/4,1/4], [1/4,1/4,1/4,1/4], [1/4,1/4,1/4,1/4], [1/4,1/4,1/4,1/4]])
+sage: alpha=0.85
+sage: G=alpha*H+(1-alpha)*S
+sage: I=matrix(QQ,[[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]])
+sage: N=G-I
+sage: 1200*N
+[-1155.00000000000  45.0000000000000  45.0000000000000  1065.00000000000]
+[ 1065.00000000000 -1155.00000000000  45.0000000000000  45.0000000000000]
+[ 45.0000000000000  1065.00000000000 -1155.00000000000  45.0000000000000]
+[ 45.0000000000000  45.0000000000000  1065.00000000000 -1155.00000000000]
+sage: M=matrix(QQ,[[-1155,45,45,1065], [1065,-1155,45,45], [45,1065,-1155,45], [45,45,1065,-1155]])
+sage: M.echelon_form()
+[ 1  0  0 -1]
+[ 0  1  0 -1]
+[ 0  0  1 -1]
+[ 0  0  0  0]
+sage: v=vector([1,1,1,1])
+sage: (v/v.norm()).n()
+(0.500000000000000, 0.500000000000000, 0.500000000000000, 0.500000000000000)
+\end{lstlisting}
+\end{ans}
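The Sage session above finds the stationary vector of G = alpha*H + (1-alpha)*S by row-reducing G - I. As an independent cross-check (our own sketch, not part of the book's answer set), plain-Python power iteration on the same matrix converges to the same normalized vector (0.5, 0.5, 0.5, 0.5):

```python
# Cross-check (ours) of the Sage session: power iteration on
# G = alpha*H + (1-alpha)*S converges to the stationary vector,
# which normalizes to (0.5, 0.5, 0.5, 0.5).
alpha = 0.85
H = [[0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
S = [[0.25] * 4 for _ in range(4)]
G = [[alpha * H[i][j] + (1 - alpha) * S[i][j] for j in range(4)]
     for i in range(4)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

v = [1.0, 0.0, 0.0, 0.0]
for _ in range(200):          # the error shrinks like 0.85**k
    v = matvec(G, v)
norm = sum(x * x for x in v) ** 0.5
v = [x / norm for x in v]
print(v)
```

Since G's rows and columns each sum to 1, the iteration preserves the entry sum, and the second-largest eigenvalue has modulus 0.85, so convergence is fast.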
\begin{ans}{3}
We have this.
\begin{equation*}
H=\begin{mat}
ch1.mp
View file @ 8018b48a
...
...
@@ -671,6 +671,11 @@ beginfig(19) % 2-flat in R3; 2x+y+z=4, paramatized
 %drawarrow z8--z5 withcolor shading_color;
 drawarrow z5--z6 withcolor shading_color;
 drawarrow z5--z7 withcolor shading_color;
+label.rt(btex {\hspace*{0.05in} \scriptsize
+    $P=\set{\colvec{x \\ y \\ z}=\colvec[r]{2 \\ 0 \\ 0}
+          +y\cdot\colvec[r]{-1/2 \\ 1 \\ 0}
+          +z\cdot\colvec[r]{-1/2 \\ 0 \\ 1}\suchthat y,z\in\Re}$} etex,.618[z3,z4]);
 endfig;
...
...
ch5.mp
View file @ 8018b48a
...
...
@@ -470,6 +470,52 @@ beginfig(10); % m
 drawarrow subpath(xpart(times433)+.05,xpart(times434)-.05) of p43;
 endfig;
+% Search: every site points in a circle
+%
+beginfig(11); %
+%numeric u; %scaling factor
+%numeric v; %vertical scaling factor
+%numeric w; %horizontal scaling factor
+numeric circlescale; circlescale=19pt;
+path node; node=fullcircle scaled circlescale;
+path n[]; % node paths
+pickup pencircle scaled line_width_light;
+z1=(0w,6v);
+n1=node shifted z1;
+draw n1; label(btex \small $p_1$ etex,z1);
+z2=(9w,y1);
+n2=node shifted z2;
+draw n2; label(btex \small $p_2$ etex,z2);
+z3=(x2,0v);
+n3=node shifted z3;
+draw n3; label(btex \small $p_3$ etex,z3);
+z4=(x1,y3);
+n4=node shifted z4;
+draw n4; label(btex \small $p_4$ etex,z4);
+path p[], q[];
+pair times[]; % intersection times
+% arrow from p1 to p2
+p12=z1--z2;
+times121=p12 intersectiontimes n1;
+times122=p12 intersectiontimes n2;
+drawarrow subpath(xpart(times121)+.05,xpart(times122)-.05) of p12;
+% arrow from p2 to p3
+p23=z2--z3;
+times232=p23 intersectiontimes n2;
+times233=p23 intersectiontimes n3;
+drawarrow subpath(xpart(times232)+.05,xpart(times233)-.05) of p23;
+% arrow from p3 to p4
+p34=z3--z4;
+times343=p34 intersectiontimes n3;
+times344=p34 intersectiontimes n4;
+drawarrow subpath(xpart(times343)+.05,xpart(times344)-.05) of p34;
+% arrow from p4 to p1
+p41=z4--z1;
+times414=p41 intersectiontimes n4;
+times411=p41 intersectiontimes n1;
+drawarrow subpath(xpart(times414)+.05,xpart(times411)-.05) of p41;
+endfig;
...
...
gr1.tex
View file @ 8018b48a
...
...
@@ -9,8 +9,8 @@ give a sense of how they arise.
 The first example is from Physics.
 \index{Physics problem}
 Suppose that we have three objects,
-one with a mass known to be 2~kg.
-We are asked
+one with a mass known to be 2~kg
+and we want
 to find the unknown masses.
 Suppose further that
 experimentation with a meter stick produces these two balances.
 \begin{center}
...
...
@@ -130,7 +130,7 @@ the system.
 We don't need
 guesswork or good luck;
 there is an algorithm that always works.
-This algorithm is called
+This algorithm is
 \definend{Gauss' method}\index{Gauss' method}%
 \index{system of linear equations!Gauss' method}
 (or \definend{Gaussian elimination}\index{Gaussian elimination}%
...
...
@@ -229,7 +229,7 @@ can change the solution set of the system.
 Similarly, adding a multiple of a row to itself is not allowed because
 adding \( -1 \) times the row to itself has the effect of multiplying the row
 by \( 0 \).
-Finally, swapping a row with itself is disallowed
+Finally, we disallow swapping a row with itself
 to make some results in the fourth chapter easier to state and remember,
 and also because it's pointless.
...
...
@@ -306,7 +306,7 @@ When writing out the calculations, we will
 abbreviate `row \( i \)' by `\( \rho_i \)'.
 For instance, we will denote a row combination operation by
 \( k\rho_i+\rho_j \),
-with the row that is changed written second.
+with the row that changes written second.
 To save writing we will
 often combine addition steps when they use the same $\rho_i$; see the
 next example.
...
...
@@ -373,7 +373,7 @@ chapter, Gauss' method gives this.
   \end{linsys}
 \end{eqnarray*}
 So \( c=4 \), and back-substitution gives that \( h=1 \).
-(The Chemistry problem is solved later.)
+(We will solve the Chemistry problem later.)
 \end{example}
 \begin{example}
...
...
@@ -464,7 +464,7 @@ Back-substitution gives \( w=1 \), \( z=2 \), \( y=1 \), and \( x=1 \).
 The row rescaling operation is
 not needed, strictly speaking, to solve linear systems.
-It is included here because we will use it
+But we will use it
 later in this chapter as part of a variation on Gauss' method,
 the Gauss-Jordan method.
...
...
@@ -550,7 +550,7 @@ that is not what causes the inconsistency\Dash
 \nearbyexample{ex:MoreEqsThanUnks}
 has more equations than unknowns and yet is consistent.
 Nor is having more equations than unknowns necessary for
-inconsistency, as is illustrated by this inconsistent system with the
+inconsistency, as we see with this inconsistent system that has the
 same number of equations as unknowns.
 \begin{eqnarray*}
   \begin{linsys}{2}
...
...
@@ -594,19 +594,19 @@ because we do not get a contradictory equation.
 \end{example}
 Don't be fooled by the final system in that example.
-A `$0=0$' equation does not
+A `$0=0$' equation is not the
 signal that a system has many solutions.
 \begin{example}
 \label{ex:NoZerosInfManySols}
 The absence of a `\( 0=0 \)' does not keep a system from having
 many different solutions.
 This system is in echelon form
-has no `$0=0$', but has infinitely many solutions.
 \begin{equation*}
   \begin{linsys}{3}
     x  &+  &y  &+  &z  &=  &0  \\
        &   &y  &+  &z  &=  &0
   \end{linsys}
 \end{equation*}
+has no `$0=0$', but has infinitely many solutions.
 Some solutions are:~$(0,1,-1)$, $(0,1/2,-1/2)$, $(0,0,0)$, and
 $(0,-\pi,\pi)$.
 There are infinitely many solutions because
...
...
@@ -618,7 +618,7 @@ many solutions.
 \nearbyexample{ex:MoreEqsThanUnks}
 shows that.
 So does this system, which does not have
 any solutions at all despite that
-when it is brought to
+in
 echelon form it has a `$0=0$' row.
 \begin{eqnarray*}
   \begin{linsys}{3}
     2x  &   &   &-  &2z  &=  &6  \\
...
...
@@ -689,7 +689,7 @@ a no response by showing that no solution exists.}
   \end{linsys}$
 \end{exparts*}
 \begin{answer}
-Gauss' method can be performed
+We can perform Gauss' method
 in different ways, so these simply
 exhibit one possible way to get the answer.
 \begin{exparts}
   \partsitem Gauss' method
...
...
@@ -848,15 +848,16 @@ a no response by showing that no solution exists.}
 \end{exparts}
 \end{answer}
 \recommended
 \item
-There are methods for solving linear systems other
+We can solve linear systems by methods other
 than Gauss' method.
 One often taught in high school is to solve one of the
 equations for a variable, then substitute the resulting expression into
 other equations.
-That step is repeated
+Then we repeat that step
 until there is an equation with only one
 variable.
-From that, the first number in the solution is derived, and then
-backsubstitution can be done.
+From that we get the first number in the solution and then we get the
+rest with back-substitution.
 This method takes longer than Gauss' method, since it involves
 more arithmetic operations, and is also more
 likely to lead to errors.
...
...
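The exercise above compares the substitution method with Gauss' method, and the answers repeat that Gauss' method can be performed in different ways. As a concrete illustration (our own sketch, not the book's code, and the sample system is made up), here is elimination plus back-substitution in exact rational arithmetic:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve A x = b by Gauss' method with back-substitution.
    Assumes a square system with a unique solution."""
    n = len(A)
    # augmented matrix, in exact rational arithmetic
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        # swap in a row with a nonzero pivot
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # k*rho_i + rho_j steps clear the entries below the pivot
        for r in range(col + 1, n):
            k = -M[r][col] / M[col][col]
            M[r] = [k * M[col][c] + M[r][c] for c in range(n + 1)]
    # back-substitution, from the bottom equation up
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# a made-up sample system with solution x=1, y=1, z=2
print(gauss_solve([[2, 1, 0], [1, 3, 1], [0, 1, 4]], [3, 6, 9]))
```

Using `Fraction` keeps every intermediate entry rational, which matches the remark in the answers that Gauss' method applied to a rational system uses only rationals.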
@@ -1023,7 +1024,7 @@ a no response by showing that no solution exists.}
 a balance the reaction problem\Dash
 have infinitely many solutions?
 \begin{answer}
 Yes.
-For example, the fact that
-the same reaction can be performed
+For example, the fact that
+we can have the same reaction
 in two different flasks shows that twice any solution is another,
 different, solution (if a physical reaction occurs then there must be
 at least one nonzero solution).
...
...
@@ -1097,7 +1098,7 @@ a no response by showing that no solution exists.}
 Gauss' method works by combining the equations in a system to make new
 equations.
 \begin{exparts}
-  \partsitem Can the equation \( 3x-2y=5 \) be derived by a sequence of
+  \partsitem Can we derive the equation \( 3x-2y=5 \) by a sequence of
     Gaussian reduction steps from the equations in this system?
     \begin{equation*}
       \begin{linsys}{2}
...
...
@@ -1105,7 +1106,7 @@ a no response by showing that no solution exists.}
        4x  &-  &y  &=  &6
      \end{linsys}
    \end{equation*}
-  \partsitem Can the equation \( 5x-3y=2 \) be derived by a sequence of
+  \partsitem Can we derive the equation \( 5x-3y=2 \) with a sequence of
    Gaussian reduction steps from the equations in this system?
    \begin{equation*}
      \begin{linsys}{2}
...
...
@@ -1113,7 +1114,7 @@ a no response by showing that no solution exists.}
        3x  &+  &y  &=  &4
      \end{linsys}
    \end{equation*}
-  \partsitem Can the equation \( 6x-9y+5z=2 \) be derived
+  \partsitem Can we derive \( 6x-9y+5z=2 \)
    by a sequence of
    Gaussian reduction steps from the equations in the system?
    \begin{equation*}
...
...
@@ -1128,7 +1129,7 @@ a no response by showing that no solution exists.}
  \partsitem Yes, by inspection the given equation results from
    \( -\rho_1+\rho_2 \).
  \partsitem No.
-    The given equation is satisfied by the pair \( (1,1) \).
+    The pair \( (1,1) \) satisfies the given equation.
    However, that pair
    does not satisfy the first equation in the system.
  \partsitem Yes.
...
...
@@ -1345,7 +1346,7 @@ a no response by showing that no solution exists.}
 again by the definition of `satisfies'.
 Subtract \( k \) times the \( i \)th equation from the \( j \)th
 equation
-(remark:~here is where \( i\neq j \) is needed; if \( i=j \) then the two
+(remark:~here is where we need \( i\neq j \); if \( i=j \) then the two
 \( d_i \)'s above are not equal) to
 get that the previous compound statement holds if and only if
 \begin{align*}
...
...
@@ -1390,7 +1391,7 @@ a no response by showing that no solution exists.}
 \recommended
 \item
 Are any of the operations used in Gauss' method redundant?
-That is, can any of the operations be made from a combination
+That is, can we make any of the operations from a combination
 of the others?
 \begin{answer}
 Yes.
...
...
@@ -1409,7 +1410,7 @@ a no response by showing that no solution exists.}
 $S_1\rightarrow S_2$
 then there is a row operation to go back
 $S_2\rightarrow S_1$.
 \begin{answer}
-Swapping rows is reversed by swapping back.
+Reverse a row swap by swapping back.
 \begin{eqnarray*}
   \begin{linsys}{3}
     a_{1,1}x_1  &+  &\cdots  &+  &a_{1,n}x_n  &=  &d_1  \\
...
...
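The exercise just above asks for a row operation undoing each row operation. A small sketch (ours, with a made-up augmented matrix): a swap is undone by the same swap, a rescaling by the reciprocal rescaling, and a combination $k\rho_i+\rho_j$ by $-k\rho_i+\rho_j$.

```python
from fractions import Fraction

def swap(M, i, j):
    """rho_i <-> rho_j"""
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

def scale(M, i, k):
    """k rho_i, with k nonzero"""
    M = [row[:] for row in M]
    M[i] = [k * x for x in M[i]]
    return M

def combine(M, k, i, j):
    """k rho_i + rho_j (i != j; the row that changes is written second)"""
    M = [row[:] for row in M]
    M[j] = [k * a + b for a, b in zip(M[i], M[j])]
    return M

M = [[2, 0, 1, 3], [1, -1, -1, 1], [3, -1, 0, 4]]  # made-up augmented matrix
assert swap(swap(M, 0, 2), 0, 2) == M
assert scale(scale(M, 1, 3), 1, Fraction(1, 3)) == M
assert combine(combine(M, 2, 0, 1), -2, 0, 1) == M
print("each operation is reversible")
```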
@@ -1664,7 +1665,7 @@ a no response by showing that no solution exists.}
 \subsection{Describing the Solution Set}
 A linear system with a unique solution has a solution set with one element.
 A linear system with no solution has a solution set that is empty.
 %
 In these cases the solution set is easy to describe.
 Solution sets are a challenge to describe only when they contain many elements.
 \begin{example}
...
...
@@ -1692,9 +1693,9 @@ not all of the variables are leading variables.
 \nearbytheorem{th:GaussMethod}
 shows that a triple $(x,y,z)$
 satisfies the first system if and only if it satisfies the third.
-Thus the solution set
+Thus we can describe the solution set
 $\set{(x,y,z)\suchthat\text{$2x+z=3$ and $x-y-z=1$ and $3x-y=4$}}$
-can also be described as
+as
 $\set{(x,y,z)\suchthat\text{$2x+z=3$ and $-y-3z/2=-1/2$}}$.
 However, this second description is not optimal.
 It has two equations instead of three
...
...
@@ -1707,7 +1708,7 @@ The second equation gives
 $y=(1/2)-(3/2)z$
 and the first equation gives
 $x=(3/2)-(1/2)z$.
-Thus the solution set can be described as
+Thus the solution set is this.
 \begin{equation*}
   \set{(x,y,z)=((3/2)-(1/2)z,(1/2)-(3/2)z,z)
        \suchthat z\in\Re}
...
...
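As a spot check (ours; it reads back in the minus signs that the page rendering dropped from the equations $2x+z=3$, $x-y-z=1$, $3x-y=4$), the parametrization $((3/2)-(1/2)z,\,(1/2)-(3/2)z,\,z)$ satisfies all three equations for every value of the parameter:

```python
from fractions import Fraction

def solution(z):
    # the parametrization derived above: x=(3/2)-(1/2)z, y=(1/2)-(3/2)z
    z = Fraction(z)
    return (Fraction(3, 2) - z / 2, Fraction(1, 2) - 3 * z / 2, z)

# every choice of the parameter z satisfies all three original equations
for z in range(-5, 6):
    x, y, zz = solution(z)
    assert 2 * x + zz == 3
    assert x - y - zz == 1
    assert 3 * x - y == 4
print("all checks pass")
```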
@@ -1819,7 +1820,7 @@ In the prior example
 $y$ and~$w$
 are free because in the echelon form system they
 do not lead while
 they are parameters because of how
-they are used in the solution set description.
+we use them to describe the set of solutions.
 Had we instead
 rewritten the second equation as
 $w=-2/3-(1/3)z$
 then
 the free variables would still be
 $y$ and~$w$
 but the parameters
...
...
@@ -1854,7 +1855,7 @@ This is another system with infinitely many solutions.
 The leading variables are
 \( x \), \( y \), and \( w \).
 The variable \( z \) is free.
 Notice that, although there are infinitely many
-solutions, the value of the variable $w$ is fixed at $-1$.
+solutions, the value of $w$ doesn't vary but is constant at $-1$.
 To parametrize, write \( w \) in terms of \( z \)
 with \( w=-1+0z \).
 Then \( y=(1/4)z \).
 Substitute for \( y \) in the first
...
...
@@ -1894,7 +1895,7 @@ has $2$~rows and $3$~columns and so
 is a \( \nbym{2}{3} \) matrix
 (read that aloud as ``two-by-three'');
 the number of rows is always first.
-Entries are named by the corresponding lowercase letter
+We denote entries with the corresponding lowercase letter
 so that $a_{i,j}$
 is the number in row~$i$ and column~$j$ of the array.
 The entry in the second row and first column is \( a_{2,1}=3 \).
...
...
@@ -1977,7 +1978,7 @@ We will write them vertically, in one-column matrices.
 For instance, the top line says that \( x=2-2z+2w \)
 and the second line says that \( y=-1+z-w \).
 The next section gives a geometric interpretation that will help us
-picture the solution sets
-when they are written in this way.
+picture the solution sets.
 \begin{definition}
 A \definend{vector}\index{vector}
...
...
@@ -2061,7 +2062,7 @@ matrix.\index{matrix!scalar multiplication}\index{scalar multiplication!matrix}
 We write scalar multiplication in either order, as
 \( r\cdot\vec{v} \) or \( \vec{v}\cdot r \),
 or without the `$\cdot$' symbol:~$r\vec{v}$.
 (Do not refer to scalar multiplication
 as `scalar product' because
-that name is used
+we use that name
 for a different operation.)
 \begin{example}
 \begin{equation*}
...
...
@@ -2127,7 +2128,7 @@ We write that in vector form.
 Note how well vector notation sets off
 the coefficients of each parameter.
 For instance, the third row of the vector form shows plainly that if
-\( u \) is held fixed then
+\( u \) is fixed then
 \( z \) increases three times as fast as \( w \).
 Another thing shown plainly is that setting both
 \( w \) and \( u \) to zero
 gives that this vector
 \begin{equation*}
...
...
@@ -2200,7 +2201,7 @@ would tell us about the size of solution sets.
 It will also help us understand the geometry of the solution
 sets.
 Many questions arise from our observation that
-Gauss' method can be done in
+we can do Gauss' method in
 more than one way (for instance, when swapping rows we may have a choice of
 more than one row).
 \nearbytheorem{th:GaussMethod}
 says that we must get the same solution set
...
...
@@ -3203,7 +3204,7 @@ particular solution and so there are no sums of that form.
 \begin{theorem}
 \label{th:GenEqPartPlusHomo}
 Any linear system's solution set
-can be described as
+has the form
 \begin{equation*}
   \set{\vec{p}+c_1\vec{\beta}_1+\,\cdots\,+c_k\vec{\beta}_k
        \suchthat c_1,\,\ldots\,,c_k\in\Re}
...
...
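The theorem just above says that every solution set is one particular solution plus the solutions of the associated homogeneous system. A toy numeric illustration (our own hypothetical one-equation system, not one from the book):

```python
# Illustration of "general = particular + homogeneous" on the
# hypothetical system consisting of the single equation x + 2y = 5.
p = (5, 0)        # one particular solution
beta = (-2, 1)    # spans the associated homogeneous system x + 2y = 0

def general(c):
    # p + c*beta, the form given by the theorem
    return (p[0] + c * beta[0], p[1] + c * beta[1])

# every vector p + c*beta solves the original system ...
for c in range(-10, 11):
    x, y = general(c)
    assert x + 2 * y == 5
# ... and beta itself solves the homogeneous one
assert beta[0] + 2 * beta[1] == 0
print("ok")
```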
@@ -3273,8 +3274,8 @@ with the reduction of the associated homogeneous system.
   \end{linsys}
 \end{eqnarray*}
 Obviously the two reductions go in the same way.
 We can study how
-linear systems are reduced
+to reduce a linear system
 by instead studying how
-the associated homogeneous systems are reduced.
+to reduce the associated homogeneous system.
 \end{example}
 Studying the associated homogeneous system has a great advantage over
...
...
@@ -3434,8 +3435,7 @@ etc.
 \begin{proof}
 Apply Gauss' method to get to echelon form.
-We may get some `\( 0=0 \)' equations that we can drop
-from the system
+We may get some `\( 0=0 \)' equations
 (if the entire
 system consists of such equations then the theorem is trivially true)
 but
...
...
@@ -3511,12 +3511,13 @@ In particular, we need this in the case where
 a homogeneous system has a unique solution.
 Then the homogeneous case
 fits the pattern of the other solution sets: in the proof above,
-the solution set is derived by taking the \( c \)'s to be the free variables
+we derive the solution set by taking the \( c \)'s to be the free variables
 and if there is a unique solution then there are no free variables.
 The proof incidentally shows,
 as discussed after
 \nearbyexample{ex:Parametrize1},
-that solution sets can always be parametrized using the free variables.
+that we can always parametrize solution sets using the free variables.
 The next lemma finishes the proof of
 \nearbytheorem{th:GenEqPartPlusHomo}
 by considering the particular solution part of the
...
...
@@ -3830,7 +3831,7 @@ has either no solutions or else has infinitely many, as with these.
 The word singular means ``departing from general expectation''
 (people often, naively, expect that systems
 with the same number of variables as equations will have a unique solution).
-Thus, it can be thought of as connoting
+Thus, we can think of it as connoting
 ``troublesome,'' or at least ``not ideal.''
 (That `singular' applies to those systems that do not have one solution
 is ironic, but it is the standard term.)
...
...
@@ -4182,8 +4183,8 @@ of Gauss' method itself in the rest of this chapter.
 \end{exparts}
 \end{answer}
 \recommended
 \item
 \nearbylemma{th:GenEqPartHomo}
-says that any particular solution may be used for $\vec{p}$.
+says that we can use any particular solution for $\vec{p}$.
 Find, if possible, a general solution to this system
 \begin{equation*}
   \begin{linsys}{4}
...
...
@@ -4445,8 +4446,8 @@ of Gauss' method itself in the rest of this chapter.
 \nearbylemma{le:HomoSltnSpanVecs},
 what happens if there are no non-`\( 0=0 \)' equations?
 \begin{answer}
 In this case the solution set is all of \( \Re^n \) and
-can be expressed in the required form
+we can express it in the required form
 \begin{equation*}
   \set{c_1\colvec[r]{1 \\ 0 \\ \vdotswithin{1} \\ 0}
        +c_2\colvec[r]{0 \\ 1 \\ \vdotswithin{0} \\ 0}
...
...
@@ -4516,7 +4517,7 @@ of Gauss' method itself in the rest of this chapter.
 Gauss' method will use only rationals (e.g.,
 \( -(m/n)\rho_i+\rho_j \)).
-Thus the solution set can be expressed
+Thus we can express the solution set
 using only rational numbers as
 the components of each vector.
 Now the particular solution is all rational.
...
...
@@ -4735,13 +4736,13 @@ we fix $x$ and $y$, we can solve for appropriate $m$, $n$, and $p$:
     &  &  &  &p  &=  &y  \hfill
   \end{linsys}
 \end{equation*}
-shows that any
+shows that we can express any
 \begin{equation*}
   \vec{v}=\colvec[r]{1 \\ 2 \\ 0}x+\colvec[r]{1 \\ 0 \\ 1}y
 \end{equation*}
-can be expressed as a member of \( R \) with
+as a member of \( R \) with
 \( m=x \), \( n=2x \), and \( p=y \):
 \begin{equation*}
   \vec{v}=
...
...
@@ -4951,7 +4952,8 @@ is in \( R \) but not in \( P \).
   =\colvec[r]{1 \\ 8}
   +\colvec[r]{0 \\ 1}(6-3t)
 \end{equation*}
-and so any vector in the form for \( S_1 \) can be stated in the form
+and so we can state any vector in the form for \( S_1 \)
+also in the form
 needed for inclusion in \( S_2 \).
 For \( S_2\subseteq S_1 \), we look for \( t \) so that
...
...
@@ -5051,7 +5053,7 @@ is in \( R \) but not in \( P \).
   =\colvec[r]{1 \\ 3 \\ 1}(2m)
   +\colvec[r]{2 \\ 1 \\ 5}(m2n)
 \end{equation*}
-and so any member of \( S_2 \) can be expressed
+and so we can express any member of \( S_2 \)
 in the form needed for \( S_1 \).
 \partsitem These sets are equal.
...
...
gr2.tex
View file @ 8018b48a
...
...
@@ -56,10 +56,10 @@ be parallel, or be the same line.
   \end{center}
 \end{minipage}
 \end{center}
 These pictures aren't a short way to prove
 the results from the prior section, because those apply
 to any number of linear equations and any number of unknowns.
-But the pictures do help us to
+But they do help us
 understand those results.
 This section develops the ideas that we need to
 express our results geometrically.
 In particular, while
...
...
@@ -129,7 +129,7 @@ despite that those displacements start in different places.
 \includegraphics{ch1.10}
 \end{center}
 Sometimes, to emphasize this property vectors have of not being anchored,
-they are referred to as \definend{free}\index{vector!free} vectors.
+we can refer to them as \definend{free}\index{vector!free} vectors.
 Thus, these free vectors are equal
 as each is a displacement of one over and two up.
 \begin{center}
...
...
@@ -246,7 +246,7 @@ canonical representation ends at that point.
   \Re^n=\set{\colvec{v_1 \\ \vdotswithin{v_1} \\ v_n}
        \suchthat v_1,\ldots,v_n\in\Re}
 \end{equation*}
-And, addition and scalar multiplication are done componentwise.
+And, we do addition and scalar multiplication componentwise.
 Having considered points, we now turn to the lines.
 In $\Re^2$, the line through \( (1,2) \) and \( (3,1) \)
...
...
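The componentwise definition just stated is one line of code per operation; a minimal sketch (ours, not from the book):

```python
def add(u, v):
    # vector addition in R^n is componentwise
    return [a + b for a, b in zip(u, v)]

def scalar_mult(r, v):
    # scalar multiplication is also componentwise
    return [r * a for a in v]

u, v = [1, 2, 1], [2, 3, 2]
print(add(u, v))           # [3, 5, 3]
print(scalar_mult(2, u))   # [2, 4, 2]
```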
@@ -279,7 +279,8 @@ the line through \( (1,2,1) \) and \( (2,3,2) \) is the set of
 and lines in even higher-dimensional spaces work in the same way.
 In $\Re^3$,
-a line uses one parameter so that there is freedom to move back and forth
+a line uses one parameter so that a particle on that line
+is free to move back and forth
 in one dimension,
 and a plane involves two parameters.
 For example, the plane through the points
...
...
@@ -306,7 +307,7 @@ For example, the plane through the points
   \colvec[r]{1 \\ 0 \\ 5}
 \end{equation*}
 are two vectors whose whole bodies lie in the plane).
 As with the line, note that
-some points in this plane are described
+we describe some points in this plane
 with negative $t$'s or negative $s$'s or both.
 In algebra and calculus we often use a description of planes involving
...
...
@@ -314,14 +315,14 @@ a single equation
 as the condition that describes
 the relationship among the first, second, and third
 coordinates of points in a plane.
-\newsavebox{\jhscratchbox}
-\savebox{\jhscratchbox}{\includegraphics{ch1.18}}
-\newlength{\jhscratchlength}\newlength{\jhscratchheight}
-\settowidth{\jhscratchlength}{\usebox{\jhscratchbox}}
-\begin{center}
-  \usebox{\jhscratchbox}  %\includegraphics{ch1.18}
-\end{center}
+% \newsavebox{\jhscratchbox}
+% \savebox{\jhscratchbox}{\includegraphics{ch1.18}}
+% \newlength{\jhscratchlength}\newlength{\jhscratchheight}
+% \settowidth{\jhscratchlength}{\usebox{\jhscratchbox}}
+\begin{equation*}
+  % \usebox{\jhscratchbox}
+  \vcenteredhbox{\includegraphics{ch1.18}}
+\end{equation*}
 % \begin{equation*}
 %   P=\set{\colvec{x \\ y \\ z}\suchthat 2x+3y-z=4}
 % \end{equation*}
...
...
@@ -329,12 +330,12 @@ The translation from such a description to the vector description that we
 favor in this book is to
 think of the condition as a one-equation linear system
 and parametrize \( x=2-y/2-z/2 \).
+\begin{equation*}
+  \vcenteredhbox{\includegraphics{ch1.19}}
+\end{equation*}
 % \begin{center}
 %   \includegraphics{ch1.19}
 %   \makebox[\jhscratchlength][l]{\includegraphics{ch1.20}}
 % \end{center}
 \begin{center}
   \makebox[\jhscratchlength][l]{\includegraphics{ch1.20}}
 \end{center}
 % \begin{equation*}
 %   P=\set{\colvec[r]{2 \\ 0 \\ 0}
 %        +y\cdot\colvec[r]{-1/2 \\ 1 \\ 0}
...
...
@@ -540,7 +541,7 @@ namely by any particular solution.
   -\colvec[r]{1 \\ 0 \\ 4}
   =\colvec[r]{3 \\ 0 \\ 7}
 \end{equation*}
-that plane can be described in this way.
+we can describe that plane in this way.
 \begin{equation*}
   \set{\colvec[r]{1 \\ 0 \\ 4}
        +m\colvec[r]{1 \\ 1 \\ 2}
...
...
@@ -754,7 +755,7 @@ namely by any particular solution.
 The other equality is similar.
 \end{answer}
 \item
-How should $\Re^0$ be defined?
+How should we define $\Re^0$?
 \begin{answer}
   We shall later define it to be a set with one element\Dash an
   ``origin''.
...
...
@@ -816,7 +817,8 @@ namely by any particular solution.
 \subsectionoptional{Length and Angle Measures}
 We've translated the first section's results about solution sets into
 geometric terms, to better understand those sets.
-But we must be careful not to be misled by our own terms; labeling subsets of
+But we must be careful not to be misled by our own terms\Dash
+labeling subsets of
 \( \Re^k \) of the forms \( \set{\vec{p}+t\vec{v}\suchthat t\in\Re} \)
 and \( \set{\vec{p}+t\vec{v}+s\vec{w}\suchthat t,s\in\Re} \)
...
...
@@ -1060,8 +1062,8 @@ between two nonzero vectors \( \vec{u},\vec{v}\in\Re^n \) is
   \arccos(\,\frac{\vec{u}\dotprod\vec{v}}{
            \norm{\vec{u}\,}\,\norm{\vec{v}\,}}\,)
 \end{equation*}
-(the angle between the zero vector and any other vector is defined to be a
-right angle).
+(by definition, the angle between the zero vector and any other vector is
+right).
 \end{definition}
 \noindent
 Thus vectors from \( \Re^n \) are
...
...
@@ -1078,7 +1080,7 @@ These vectors are orthogonal.
   \qquad
   $\colvec[r]{1 \\ -1}\dotprod\colvec[r]{1 \\ 1}=0$
 \end{center}
-The arrows are shown away from canonical position
+We've drawn the arrows away from canonical position
 but nevertheless the vectors are orthogonal.
 \end{example}
...
...
@@ -1181,11 +1183,11 @@ Not every vector in each is orthogonal to all vectors in the other.
   \colvec[r]{1 \\ 3 \\ -1}
 \end{equation*}
 \begin{answer}
-The set
+We could describe the set
 \begin{equation*}
   \set{\colvec{x \\ y \\ z}\suchthat 1x+3y-1z=0}
 \end{equation*}
-can also be described with parameters in this way.
+with parameters in this way.
 \begin{equation*}
   \set{\colvec[r]{-3 \\ 1 \\ 0}y+\colvec[r]{1 \\ 0 \\ 1}z
        \suchthat y,z\in\Re}
...
...
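A quick check (ours; it assumes the plane in the answer above is $1x+3y-1z=0$ with parameter vectors $(-3,1,0)$ and $(1,0,1)$, reading the dropped minus signs back in): every parametrized vector is orthogonal to $(1,3,-1)$.

```python
# Check (ours) that the parametrization lies in the plane
# 1x + 3y - 1z = 0, i.e. is orthogonal to the vector (1, 3, -1).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

normal = (1, 3, -1)
b1, b2 = (-3, 1, 0), (1, 0, 1)   # the two parameter vectors

for y in range(-3, 4):
    for z in range(-3, 4):
        v = tuple(y * a + z * b for a, b in zip(b1, b2))
        assert dot(normal, v) == 0
print("all in the plane")
```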
@@ -1245,7 +1247,7 @@ Not every vector in each is orthogonal to all vectors in the other.
   \partsitem Associate?
   \partsitem How does it interact with scalar multiplication?
 \end{exparts}
-As always, any assertion must be backed by
+As always, you must back any assertion with
 either a proof or an example.
 \begin{answer}
 Assume that
 \( \vec{u},\vec{v},\vec{w}\in\Re^n \)
 have components
 \( u_1,\ldots,u_n,v_1,\ldots,w_n \).
...
...
@@ -1541,7 +1543,7 @@ Not every vector in each is orthogonal to all vectors in the other.
 \item
 Describe the angle between two vectors in \( \Re^1 \).
 \begin{answer}
-The angle between \( (a) \) and \( (b) \) is found
+We can find the angle between \( (a) \) and \( (b) \)
 (for \( a,b\neq 0 \)) with
 \begin{equation*}
   \arccos(\frac{ab}{\sqrt{a^2}\sqrt{b^2}}).
...
...
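In $\Re^1$ the formula above gives only two possible angles, $0$ and $\pi$, depending on whether $a$ and $b$ share a sign; a one-line check (ours):

```python
import math

def angle(a, b):
    # arccos( ab / (sqrt(a^2) sqrt(b^2)) ) for nonzero a, b in R^1
    return math.acos(a * b / (math.sqrt(a * a) * math.sqrt(b * b)))

# same sign -> angle 0; opposite signs -> angle pi
assert angle(2, 5) == 0.0
assert angle(-3, 7) == math.pi
print("two possible angles: 0 and pi")
```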
@@ -1659,7 +1661,7 @@ Not every vector in each is orthogonal to all vectors in the other.
 The \( \vec{z}_1+\vec{z}_2=\zero \) case is easy.
 For the rest, by the definition of angle,
-we will be done if we show this.
+we will be finished if we show this.
 \begin{equation*}
   \frac{\vec{z}_1\dotprod(\vec{z}_1+\vec{z}_2)}{
     \norm{\vec{z}_1}\,\norm{\vec{z}_1+\vec{z}_2} }
...
...
gr3.tex