The Spectral Theorem, part 1: Complex Version.

July 29, 2010

(Note) So, the general spectral theorem is pretty sweet, but (as Sheldon Axler does in Linear Algebra Done Right, the book that I’m essentially following in this blog) I’m going to split it up into two parts.  In “real” math, I suppose we should consider two cases: when the field is algebraically closed and when it is not.  The algebraically closed case is going to be nearly identical to the complex case.  But because we don’t know “how far” from algebraically closed the other field is, I’m not entirely certain that the “not algebraically closed” case follows from the real case of the theorem.  For example, if we were to use the rationals in place of the reals, we could most likely produce examples which do not satisfy the real version of the spectral theorem.  Either way, we will mostly be using this “in real life” in the case that the field is either the reals or the complexes, so I do not feel too bad about not proving this in its full generality.

So, let’s wonder something for a second: why have I been proving all these random things?  What the hell were we looking for again?

 

Little Review.

Oh, right, we wanted to decompose a space V into a bunch of little one-dimensional invariant subspaces U_{1}, \dots, U_{n} with V = U_{1} \oplus \cdots \oplus U_{n}, so that we have

T(V) = T(U_{1}) \oplus \cdots \oplus T(U_{n})

for some linear map T.  It’s cute, because it looks like what happens to the elements!  But, how the heck do we get one-dimensional invariant subspaces again?  Oh, right, having U_{i} be a one-dimensional invariant subspace is the same thing as saying

(\exists w\in U_{i},\ w\neq 0)(\forall u\in U_{i})(\exists \lambda\in F)(T(u) = \lambda w)

where F is the underlying field (note that \lambda is allowed to depend on u).  Or, in less math-y terms, this means that there’s some nonzero w\in U_{i} such that every time we apply T to an element of U_{i}, we get some multiple of w.  In other, slightly more sophisticated, words, “w generates T(U_{i})“.  But then, in particular, we have T(w) = \lambda w for some \lambda\in F, and so, in fact, w is an eigenvector of T!  Okay, nice review, huh?
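
If you like seeing this concretely, here’s a tiny numerical sketch in Python/numpy.  The matrix and eigenvector are a made-up example of my own (not from the post): T has eigenvector w = (1, 1) with eigenvalue 3, so U = span(w) is a one-dimensional invariant subspace.

import numpy as np

# A made-up 2x2 example: T has eigenvector w = (1, 1) with
# eigenvalue 3, so U = span(w) is one-dimensional and invariant.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w = np.array([1.0, 1.0])

# Every u in U is a scalar multiple of w, and T sends it to
# another multiple of w (here, T(u) = 3u).
for c in [0.5, -2.0, 7.0]:
    u = c * w
    print(T @ u, "which is", 3 * c, "* w")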

One New Lemma Before We Begin.

Okay, during the proof of the spectral theorem, I realized that we needed something that I didn’t actually prove.  It’s not difficult, so I’ll do it now.  It should really seem reasonable: it’s the fact that \|Tv\| = \|T^{\ast}v\| for all v\in V if and only if T is normal.  It kind of seems like a reasonable statement if we think about what normalcy really “means.”  But let’s not get all “deep” here, and let’s just prove it:

Lemma: T is normal if and only if we have \|Tv\| = \|T^{\ast}v\| for all v\in V.

Proof. This is not really a hard proof, but it has a clever step.  We’re gonna prove this in one fell swoop, and this proof is directly out of Linear Algebra Done Right, simply because I can’t think of any nicer way to do it.

T is normal \Leftrightarrow T^{\ast}T = TT^{\ast}

\Leftrightarrow T^{\ast}T - TT^{\ast} = 0

\Leftrightarrow \langle (T^{\ast}T - TT^{\ast})v, v\rangle = 0 for all v\in V (this is the clever step: T^{\ast}T - TT^{\ast} is self-adjoint, and a self-adjoint operator S satisfies \langle Sv, v\rangle = 0 for all v if and only if S = 0)

\Leftrightarrow \langle T^{\ast}Tv, v\rangle = \langle TT^{\ast}v, v\rangle for all v\in V

\Leftrightarrow \|Tv\|^{2} = \|T^{\ast}v\|^{2} for all v\in V (since \langle T^{\ast}Tv, v\rangle = \langle Tv, Tv\rangle = \|Tv\|^{2}, and similarly \langle TT^{\ast}v, v\rangle = \|T^{\ast}v\|^{2})

\Leftrightarrow \|Tv\| = \|T^{\ast}v\| for all v\in V

which proves the lemma.  \Box.

Note that \Leftrightarrow means “if and only if,” so reading this chain forwards and backwards proves both directions at once.
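
If you want a sanity check, here’s a little numpy experiment (the matrix is my own toy example, not anything from Axler): we take a normal matrix that isn’t self-adjoint and watch the two norms agree.

import numpy as np

rng = np.random.default_rng(0)

# A made-up normal (but not self-adjoint) matrix: A A^* = A^* A.
A = np.array([[1.0, 1.0j],
              [1.0j, 1.0]])
A_star = A.conj().T
assert np.allclose(A @ A_star, A_star @ A)  # A really is normal

# The lemma says ||A v|| = ||A^* v|| for every v; spot-check a few.
for _ in range(3):
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    print(np.linalg.norm(A @ v), np.linalg.norm(A_star @ v))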

 

The New Stuff.

Okay, so, now, if we want to totally decompose V into one-dimensional invariant subspaces, we need some things.  First, we’re going to say, as usual, that V is nontrivial and finite-dimensional, with dim(V) = n.  So we want n linearly independent eigenvectors to create a basis for V (recall that eigenvectors corresponding to distinct eigenvalues are automatically linearly independent, as we proved before).  Now, this basis is nice, but you know what would be better?  If the eigenvectors were orthogonal to each other and scaled to unit length.  Well, we’d need an inner product in order to even say that, but they’d be SO MUCH NICER, no?  Then we’d have an orthonormal basis for V made entirely out of eigenvectors.  Holy crap, that’d be awesome, right?  Yes, it would be.  But when does that crap ever happen?  What kind of space does V need to be?  What kind of map does T have to be?  Well, funny you should ask that…

 

Theorem (The Spectral Theorem for Complex Vector Spaces): Suppose that V is a nontrivial finite-dimensional complex inner product space (a space which has an inner product), and T:V\rightarrow V is a linear map.  Then V has an orthonormal basis consisting of eigenvectors of T if and only if T is normal.

 

Before we prove this, let’s remind ourselves what normal means.  A linear map T is normal if TT^{\ast} = T^{\ast}T.  Also, we’re going to need one particular fact: because V is a complex vector space, there exists an orthonormal basis \{e_{1}, \dots, e_{n}\} of V such that M(T) is upper triangular.  We will actually use this, and it’s not too difficult to prove: if you think about it, it’s just a nice application of the Gram–Schmidt process to a basis that already puts M(T) in upper triangular form.  It shouldn’t seem too unbelievable.
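
By the way, this upper-triangular fact is exactly the (complex) Schur decomposition, and if you want to see it computed, scipy will do it for you.  Here’s a minimal sketch in Python (the matrix is a made-up example of mine):

import numpy as np
from scipy.linalg import schur

# A made-up matrix, nothing special about it.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]], dtype=complex)

# schur(..., output='complex') returns U upper triangular and Z with
# orthonormal columns such that A = Z U Z^*; the columns of Z are the
# orthonormal basis e_1, ..., e_n promised above.
U, Z = schur(A, output='complex')
print(np.allclose(A, Z @ U @ Z.conj().T))      # True
print(np.allclose(Z.conj().T @ Z, np.eye(2)))  # True: columns orthonormal
print(U)                                       # upper triangular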

Proof. Okay.  Let’s prove the easy part first.  (\Rightarrow direction). Let’s suppose that V has an orthonormal basis consisting of eigenvectors of T.  Well, then, what’s M(T) with respect to this basis?  It’s a diagonal matrix, with the eigenvalues down the diagonal.  Then what’s M(T^{\ast})?  It’s also a diagonal matrix: since our basis is orthonormal, M(T^{\ast}) is the conjugate transpose of M(T), and transposing a diagonal matrix does nothing, so it’s just the conjugate of M(T).  Do these commute with each other?  Yes they do, as any two diagonal matrices of the same size commute with each other.  If you have not seen this proof, you should do it.  It’s kind’a kickin’.
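
(For the curious, the commuting computation is a one-liner.  If D and E are diagonal with diagonal entries d_{i} and e_{i}, then

(DE)_{i,j} = \sum_{k} D_{i,k}E_{k,j} = d_{i}e_{i}\delta_{i,j} = e_{i}d_{i}\delta_{i,j} = (ED)_{i,j}

since the entries are just scalars, and scalars commute.)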

Okay, now, (\Leftarrow direction).  Suppose that T is a normal linear map.  We have, by that note before this proof, that there’s some orthonormal basis \{e_{1}, \dots, e_{n}\} such that M(T) is upper triangular.  In other words, given this basis, we have

M(T) = \left(\begin{array}{ccc} a_{1,1} & \cdots & a_{1,n} \\ & \ddots & \vdots \\ 0 & & a_{n,n}\end{array}\right)

where that 0 in the corner means “everything below the diagonal is 0.”  Now, the point is to show that this matrix is “really” a diagonal matrix.  How are we going to do that?  Well, well, well.

Okay, so, note that T(e_{1}) = a_{1,1}e_{1}, which implies that \|T(e_{1})\|^{2} = |a_{1,1}|^{2}.  Yes.  Now, here’s a clever little thing: what does M(T^{\ast}) look like?

M(T^{\ast}) = \left(\begin{array}{ccc} \overline{a_{1,1}} & & 0 \\ \vdots & \ddots & \\ \overline{a_{1,n}} & \cdots & \overline{a_{n,n}}\end{array}\right)

So note that T^{\ast}(e_{1}) = \overline{a_{1,1}}e_{1} + \overline{a_{1,2}}e_{2} + \dots + \overline{a_{1,n}}e_{n}, reading off the first column of M(T^{\ast}).  This means, in particular (since |\overline{a}| = |a|),

\|T^{\ast}(e_{1})\|^{2} = |a_{1,1}|^{2} + |a_{1,2}|^{2} + \dots + |a_{1,n}|^{2}

Yeah?  BUT, WAIT A SECOND.  T is normal, and so by the lemma in the previous section, \|T(e_{1})\| = \|T^{\ast}(e_{1})\|!  Squaring both sides, we have that

|a_{1,1}|^{2} = |a_{1,1}|^{2} + |a_{1,2}|^{2} + \dots + |a_{1,n}|^{2}

which means that everything besides a_{1,1} is 0! WHAT.  Yes.  Really.  Check it.  Getting rid of the |a_{1,1}|^{2}‘s from both sides leaves us with

|a_{1,2}|^{2} + \dots + |a_{1,n}|^{2} = 0

and because all of these values are non-negative, it follows that all of them are zero.

Now, let’s do this for the rest of the e_{i}‘s, working down one row at a time.  Suppose we’ve already shown that rows 1 through i-1 of M(T) are zero off the diagonal.  Then the i-th column of M(T) has a_{i,i} as its only (potentially) nonzero entry, so \|T(e_{i})\|^{2} = |a_{i,i}|^{2}, while reading off the i-th row (that is, the i-th column of M(T^{\ast})) gives \|T^{\ast}(e_{i})\|^{2} = |a_{i,i}|^{2} + |a_{i,i+1}|^{2} + \dots + |a_{i,n}|^{2}.  The lemma then gives us

|a_{i,i}|^{2} = |a_{i,i}|^{2} + |a_{i,i+1}|^{2} + \dots + |a_{i,n}|^{2}

which means, by the same argument as before, that in row i only a_{i,i} is potentially non-zero, and everything else is zero.  This means that our matrix is

M(T) = \left(\begin{array}{ccc}a_{1,1}& &0 \\ & \ddots & \\ 0 & & a_{n,n}\end{array}\right)

so M(T) is diagonal.  Note, then, that the diagonal entries are eigenvalues (as T(e_{i}) = a_{i,i}e_{i}) with associated eigenvectors e_{i}!  This means that our basis is an orthonormal basis made entirely of eigenvectors.  Which is what we wanted!  \Diamond.

This tells us a lot.  If T is normal, then we can say a lot about the way that V decomposes.  In fact, if T is normal and V is complex, then we actually get one of the nicest possible decompositions of V!
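
To see the theorem in numerical miniature, here’s a little Python sketch reusing my toy normal matrix from the lemma section.  I’m assuming distinct eigenvalues here, so the unit eigenvectors that numpy’s eig returns are automatically orthogonal to one another:

import numpy as np

# The same made-up normal matrix as before; its eigenvalues 1 + i
# and 1 - i are distinct, so the unit eigenvectors are automatically
# orthogonal to each other.
A = np.array([[1.0, 1.0j],
              [1.0j, 1.0]])
vals, vecs = np.linalg.eig(A)

# vecs is unitary: its columns are an orthonormal basis of eigenvectors...
print(np.allclose(vecs.conj().T @ vecs, np.eye(2)))          # True
# ...and in that basis, M(T) is the diagonal matrix of eigenvalues.
print(np.allclose(vecs.conj().T @ A @ vecs, np.diag(vals)))  # True

(If eigenvalues repeat, eig isn’t guaranteed to hand you an orthonormal basis within an eigenspace, but the theorem says one always exists when T is normal.)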

Next time, we will plow right on through to the real version of the spectral theorem.  The proof (which I will essentially be paraphrasing from Axler, as usual) is actually significantly different from the proof of the complex case, and it requires a few theorems which will not seem at all related until we actually do the proof.  The reason I’m doing the real version of the proof, and including all those little weird lemmas, is that it actually does tell us quite a bit about the character of real spaces, which are the kind of spaces I feel we’re most familiar with.

The secret, which I’ll tell you now, is that for real spaces we essentially replace “normal” with “self-adjoint” and the same conclusion holds.  This is kind of nice, but because self-adjointness is a much stronger condition (self-adjoint actually implies normal.  why?), there are fewer maps that actually satisfy the real spectral theorem.  Sad.
