Another Look at the Rank-Nullity Theorem: Linear Maps.

June 9, 2010

 

The last legit theorem we went over was the rank-nullity theorem.  This time, we’re going to discuss a couple of cute theorems leading up to the rank-nullity theorem and its applications to vector spaces.  For this post’s sake, all vector spaces will be finite-dimensional.

Bases!

First, let’s state a ridiculously important theorem:

 

Thm: If V is a (finite-dimensional) vector space, then it has a basis with finitely many elements.

 

The proof of this theorem uses the idea that we can keep enlarging a linearly independent set so that it spans more and more of the space.  Eventually, this process will stop, and we’ll have a basis for the vector space.
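To make that concrete, here’s a toy version of the greedy process in Python (a sketch of my own, not anything from a textbook; grow_basis is a made-up helper): keep a running list of vectors and only adjoin a new one if it raises the rank, i.e., if it sits outside the span of what we already have.

```python
import numpy as np

def grow_basis(candidates, tol=1e-10):
    """Greedily collect a linearly independent subset of `candidates`.

    A vector is kept only if it raises the rank of the collection so far,
    i.e., if it lies outside the span of the vectors already chosen.
    """
    basis = []
    for v in candidates:
        trial = np.column_stack(basis + [v])
        if np.linalg.matrix_rank(trial, tol=tol) > len(basis):
            basis.append(v)
    return basis

# The second vector is a multiple of the first, so it gets skipped.
vecs = [np.array([1., 0., 0.]),
        np.array([2., 0., 0.]),
        np.array([0., 1., 0.])]
print(len(grow_basis(vecs)))  # 2
```

Speaking of which…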

 

Thm: If V is a vector space and the set \{v_1, v_2, \dots, v_m\} is a spanning set (that is, every element of V is a linear combination of elements of the set), then this set can be reduced to a basis for V.

 

and

 

Thm: If V is a vector space and the set \{v_1, v_2, \dots, v_m\} is linearly independent in V, then it can be extended to a basis for V.

 

and

 

Thm: If V is a vector space, then any spanning set of V has cardinality at least as large as that of any linearly independent set of vectors in V.

 

These theorems are not hard to prove, and some of the best proofs I’ve seen for them can be found in Linear Algebra Done Right by Sheldon Axler.  This, coincidentally, is the book that I’ve been going through to re-learn linear algebra.  The last theorem is especially important: it should seem reasonable that spanning sets are intuitively “large” since they must span the entire vector space, and that linearly independent sets are “small” since each of their elements has to be somehow “independent” from the others.  This theorem confirms what we intuitively feel is true: EVERY spanning set has at least as many elements as ANY linearly independent set.  We now know a lot about bases: specifically, we can mess around with sets until we’re sure we have a basis for our vector space.
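The “reduce a spanning set to a basis” direction can even be done mechanically.  Here’s a small sketch of my own using sympy, with made-up example vectors: stack the spanning vectors as columns, row-reduce, and keep the pivot columns.

```python
from sympy import Matrix

# Columns are four vectors that span R^2, with some redundancy.
spanning = Matrix([[1, 2, 3, 0],
                   [0, 0, 1, 1]])

# rref() returns the reduced row echelon form and the pivot column indices;
# the corresponding columns of the original matrix form a basis of the span.
_, pivots = spanning.rref()
basis = [spanning.col(j) for j in pivots]
print(pivots)      # (0, 2): the first and third vectors already span R^2
print(len(basis))  # 2
```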

Before we can state one particularly awesome result, we need to have some notation for the “largeness of a set.”  So, I will use the notation |C| to mean “the number of elements in C.”

 

Thm: If V is a vector space, then every basis has the same number of elements.

Proof: Suppose that there are two bases for V, call them B = \{v_1, v_2, \dots, v_n\} and C = \{w_1, w_2, \dots, w_m\}.  Since B is a spanning set and C is linearly independent, we must have that |C| \leq |B|.  On the other hand, since C is a spanning set and B is linearly independent, we must have that |B| \leq |C|.  Putting this together, we note that |B| = |C|, which completes the proof.

 

So, this is a pretty nice theorem, no lies.  It tells us that, up to replacing and permuting elements, every basis of a vector space V is essentially the same.  A little later, we will make this precise by noting that there is always an easy way to “change the basis” of a vector space.

At this point, we define the dimension of a vector space: if B is a basis for the vector space V, then we define \dim(V) = |B|; in other words, the dimension of V is the size of a basis.  Since every basis is the same size, this is unambiguous.
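In coordinates, computing a dimension is just computing a matrix rank.  A quick sketch (my own example): put vectors spanning your subspace into the columns of a matrix; the rank of that matrix is the size of a basis, hence the dimension of the span.

```python
import numpy as np

# Three vectors in R^4, but the third is the sum of the first two,
# so they only span a 2-dimensional subspace.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [2., 0., 2.],
              [0., 3., 3.]])

print(np.linalg.matrix_rank(A))  # 2 = dimension of the span
```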

 

Linear Maps!

Some people might argue that vector spaces alone are kind of bland: after all, once you have a basis, that’s about it.  You can do whatever you want inside of it, and everything will be kosher.  Where’s the adventure in that?  Are we satisfied to just sit at home in our nice little vector spaces or do we want to take our ships and travel to new and exotic vector spaces?  Linear maps tell us how to translate things from one vector space (like our dear, sweet home) to another vector space (like a cave of wonders!).  Specifically, we have the following:

A linear map T: V\rightarrow W for vector spaces V and W is a map such that, for all scalars a, b and all vectors v, w \in V, we have T(av + bw) = aT(v) + bT(w).  In particular, we have additivity (that T(v + w) = T(v) + T(w)) and homogeneity (that T(av) = aT(v)), which is pretty nice, all things considered.
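If you like experiments, you can spot-check a map for linearity numerically.  This little harness (my own sketch; looks_linear is a made-up helper, nothing standard) throws random scalars and vectors at T and checks whether T(av + bw) = aT(v) + bT(w) holds; it can only falsify linearity, never prove it, but it catches non-linear maps quickly.

```python
import numpy as np

def looks_linear(T, dim, trials=100, tol=1e-9):
    """Spot-check T : R^dim -> R^? with random scalars and vectors."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        a, b = rng.standard_normal(2)
        v, w = rng.standard_normal(dim), rng.standard_normal(dim)
        if not np.allclose(T(a * v + b * w), a * T(v) + b * T(w), atol=tol):
            return False
    return True

print(looks_linear(lambda v: 3 * v, dim=4))  # True: scaling is linear
print(looks_linear(lambda v: v + 1, dim=4))  # False: translation is not
```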

There are a number of good examples of linear maps.  For example, the identity map I:V\rightarrow V is a nice example, since I(av + bw) = av + bw = aI(v) + bI(w).

Differentiation is another nice linear map: if we have some polynomial p, then let differentiation be defined by D(p) = p', which will stand for the standard derivative.  This is clearly a linear map, since D(ap) = ap' = aD(p) and, if g is another polynomial, then D(p + g) = p' + g' = D(p) + D(g).
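In fact, if we restrict to polynomials of degree at most 3 and record a polynomial a_0 + a_1x + a_2x^2 + a_3x^3 by its coefficient vector (a_0, a_1, a_2, a_3), differentiation becomes an honest matrix, foreshadowing the “linear maps as matrices” business mentioned below.  A sketch, with a matrix I wrote down by hand:

```python
import numpy as np

# D acts on coefficient vectors (a0, a1, a2, a3) of a0 + a1*x + a2*x^2 + a3*x^3;
# the derivative is a1 + 2*a2*x + 3*a3*x^2.
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

p = np.array([5., 1., 4., 2.])  # 5 + x + 4x^2 + 2x^3
print(D @ p)                    # [1. 8. 6. 0.], i.e. 1 + 8x + 6x^2
```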

A last linear map to consider is something like “multiplication by x.”  Let T_{x} be the map defined by T_{x}(p) = xp for a polynomial p.  Let’s check that this is a linear map: given some other polynomial g, we have that

T_{x}(ap + bg) = (ap + bg)x = apx + bgx

= a(px) + b(gx)=a(T_{x}(p))+b(T_{x}(g))

and so this is a linear map.  Pretty sweet, right?
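Multiplication by x can also be written as a matrix on coefficient vectors, with the twist that it bumps the degree up by one, so it maps a smaller space into a bigger one.  A sketch of my own on polynomials of degree at most 2:

```python
import numpy as np

# T_x sends a0 + a1*x + a2*x^2 to a0*x + a1*x^2 + a2*x^3: a "shift" of
# the coefficient vector from R^3 down into R^4.
T_x = np.array([[0., 0., 0.],
                [1., 0., 0.],
                [0., 1., 0.],
                [0., 0., 1.]])

p = np.array([2., 3., 1.])  # 2 + 3x + x^2
print(T_x @ p)              # [0. 2. 3. 1.], i.e. 2x + 3x^2 + x^3
```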

Now, sometimes when we map things to other things, things become zero.  Think about our differentiation map: whenever we take the derivative of a constant, we get zero.  Upsetting, but true.  We give a special name to this kind of thing: let the nullity of the map T be defined as the set of vectors \{v\in V\,|\,T(v) = 0\}.  (This set is more commonly called the null space or kernel of T, with “nullity” reserved for its dimension, but we’ll stick with this naming here.)  In other words, the nullity is the set of all vectors that go to zero when we apply the linear map.  You might be concerned because we’ve already used this word before, when we were talking about matrices: well, surprise, it turns out to be the same kind of deal, since we can represent linear maps as matrices.  More about this later, though.
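For matrix maps we can compute this set directly.  Using the differentiation matrix from above and scipy’s null_space (my own illustration): the only polynomials D kills are the constants, so the null space should be 1-dimensional.

```python
import numpy as np
from scipy.linalg import null_space

D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

N = null_space(D)  # columns are an orthonormal basis of the null space
print(N.shape[1])  # 1: only the constants die
print(N.round(3))  # the basis vector (1, 0, 0, 0), up to sign
```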

Let’s call the rank of a linear map the set of all outputs of the map.  Specifically, the rank is equal to \{T(v)\,|\,v\in V\}.  (This set is usually called the range or image of T, with “rank” reserved for its dimension, but we’ll keep the same convention as with nullity.)  Notice that 0\in \mbox{Rank}(T), and, in fact, I will leave it to the reader to prove that if T:V\rightarrow W is a linear map, then \mbox{Rank}(T) is a subspace of W and \mbox{Nullity}(T) is a subspace of V.  These two facts are not hard to prove, but definitely should be attempted by the reader!  It’s enlightening.
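Again, for a matrix map this is computable: the rank, as a set, is the column space, and scipy’s orth hands back an orthonormal basis for it.  Continuing the differentiation example (my own sketch):

```python
import numpy as np
from scipy.linalg import orth

D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

R = orth(D)        # columns are an orthonormal basis of the range of D
print(R.shape[1])  # 3: derivatives of cubics are exactly the quadratics
```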

Now, we’ve already stated the Rank-Nullity theorem but in a slightly different context.  We now state and prove it for linear maps.  In particular, check out the proof: it is pretty slick!

 

Thm (Rank-Nullity): If T:V\rightarrow W is a linear map between vector spaces V and W, then \dim(\mbox{Rank}(T)) + \dim(\mbox{Nullity}(T)) = \dim(V).  In other words, the dimension of the rank plus the dimension of the nullity of a map equals the dimension of the “domain” vector space.
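Before the proof, a quick numerical sanity check of the statement (my own example, using scipy): for any matrix A, viewed as a map out of R^n where n is the number of columns, the rank and the dimension of the null space should add up to n.

```python
import numpy as np
from scipy.linalg import null_space

# A 3x5 matrix, so the "domain" is R^5; the third row is the sum of
# the first two, so the rank is 2.
A = np.array([[1., 2., 0., 0., 1.],
              [0., 0., 1., 0., 1.],
              [1., 2., 1., 0., 2.]])

rank = np.linalg.matrix_rank(A)   # dim(Rank(A))
nullity = null_space(A).shape[1]  # dim(Nullity(A))
print(rank, nullity, A.shape[1])  # 2 3 5, and indeed 2 + 3 == 5
```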

Pf: Let \dim(V) = n.  The nullity is a subspace of V, so let \{v_1, v_2, \dots, v_m\} be a basis for it; being a basis, this set is linearly independent in V.  We can extend this linearly independent set to a basis for V, say \{v_1, v_2, \dots, v_m, w_{1}, \dots, w_{n-m}\}.

If we were to show that \{T(w_1), \dots, T(w_{n-m})\} is a basis for Rank(T), then this would prove the theorem (as this shows the rank has dimension n-m, the nullity has dimension m, and their sum is the dimension of V), so let’s do just that.  First, let’s show that this is a spanning set.

If v\in V, then we can write v = a_{1}v_{1} + \dots + a_{m}v_{m} + b_{1}w_{1} + \dots + b_{n-m}w_{n-m} in terms of our basis for V.  Now, applying T, we get

T(v) = T(a_{1}v_{1} + \dots + a_{m}v_{m} + b_{1}w_{1} + \dots + b_{n-m}w_{n-m})

T(v) = a_{1}T(v_{1}) + \dots + a_{m}T(v_{m}) + b_{1}T(w_{1}) + \dots + b_{n-m}T(w_{n-m})

and since the vectors v_{1}, \dots, v_{m} are in the nullity, the terms T(v_{i}) are all 0.  So we are left with

T(v) = b_{1}T(w_{1}) + \dots + b_{n-m}T(w_{n-m})

and since v was arbitrary, we see that for any vector v\in V, T(v) can be written in this manner.  Therefore, \{T(w_1), \dots, T(w_{n-m})\} spans \mbox{Rank}(T).  But what if these vectors aren’t linearly independent?  Suppose, for a contradiction, that they are not.  Then, by the definition of linear dependence, there exist scalars c_i, not all of which are 0, such that the following equalities hold:

c_{1}T(w_{1}) + \dots + c_{n-m}T(w_{n-m}) = 0

T(c_{1}w_{1}) + \dots + T(c_{n-m}w_{n-m}) = 0

T(c_{1}w_{1} + \dots + c_{n-m}w_{n-m}) = 0

which means that c_{1}w_{1} + \dots + c_{n-m}w_{n-m} is in the nullity of T.  Since \{v_1, v_2, \dots, v_m\} is a basis for the nullity, we have that, for some suitable coefficients,

a_{1}v_{1} + a_{2}v_{2} + \dots + a_{m}v_{m} = c_{1}w_{1} + \dots + c_{n-m}w_{n-m}

and rearranging, we get

a_{1}v_{1} + a_{2}v_{2} + \dots + a_{m}v_{m} - c_{1}w_{1} - \dots - c_{n-m}w_{n-m} = 0

but since \{v_1, v_2, \dots, v_m, w_{1},\dots, w_{n-m}\} is a basis for V, it is linearly independent, so the only way a linear combination of its elements can equal 0 is if every coefficient is 0.  Since not all of the c_i are 0, the above cannot hold!  This is a contradiction!  \Rightarrow\Leftarrow.  This (finally) proves the theorem.  \Box
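The proof is constructive enough that we can replay it numerically.  For the differentiation map on cubics (my own sketch, reusing the matrix from earlier): the constants are a basis for the nullity; extend with x, x^2, x^3 to get a basis of the whole space; then the images of the added vectors should form a basis for the rank, giving 3 + 1 = 4 = \dim(V).

```python
import numpy as np

# Differentiation on cubics, acting on coefficient vectors (a0, a1, a2, a3).
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

# Basis for the nullity: the constant polynomial 1.
v1 = np.array([1., 0., 0., 0.])
print(np.allclose(D @ v1, 0))  # True: v1 really is in the nullity

# Extend to a basis of R^4 with the monomials x, x^2, x^3.
ws = [np.array([0., 1., 0., 0.]),
      np.array([0., 0., 1., 0.]),
      np.array([0., 0., 0., 1.])]

# Per the proof, the images T(w_i) should form a basis for Rank(D).
images = np.column_stack([D @ w for w in ws])
print(np.linalg.matrix_rank(images))  # 3 = dim(Rank(D))
print(3 + 1 == 4)                     # dim(Rank) + dim(Nullity) == dim(V)
```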

 

In the next post, we’ll look at some applications of this theorem and I’ll shell out some problems for the reader to try.
