## The Spectral Theorem, part 2: The Real Part!

### August 1, 2010

Okay, so, last time we talked about the spectral theorem for complex vector spaces. What did it say? Do you remember? Don’t look. Fine, look. Either way, it said that we have an orthonormal basis made of eigenvectors of some linear map $T$ if and only if $T$ is normal. Now, being normal is not that big of a deal. It just means that $TT^{\ast} = T^{\ast}T$. Not a biggie, right? Yeah.

But complex spaces are much nicer than real spaces. For one thing, the equation $x^{2} = -1$ has a solution in $\mathbb{C}$ but no solution in $\mathbb{R}$, and this is sort of a depressing foreshadowing of the kinds of problems that can come along and haunt our maps.

We’re going to have to have a more stringent restriction on our real space linear maps because real space isn’t as “liquid” as complex space — there just isn’t as much room for things to move around! Does this make sense? No? Doesn’t have to. The point is, for this version of the spectral theorem, we’re going to modify our conditions slightly: instead of our map being *normal*, we’re going to make the much more stringent assumption that $T$ be *self-adjoint*. Remember what this means? $T = T^{\ast}$. Yeah, that’s a *huge* difference. In fact, a map being self-adjoint automatically makes it normal: if $T$ is self-adjoint, then $TT^{\ast} = TT = T^{\ast}T$. Yeah, so, this is kind of a big deal.
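(A quick numerical aside, not part of the proof: if you want to see the “self-adjoint implies normal, but not conversely” thing concretely, here’s a little sketch with numpy matrices. Over the reals the adjoint is just the transpose, and the two matrices here are made-up examples: a symmetric one and a rotation.)

```python
import numpy as np

# A self-adjoint (symmetric) matrix: S equals its transpose.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# A rotation by 90 degrees: normal, but definitely not symmetric.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def is_normal(M):
    # "normal" means M commutes with its adjoint (the transpose, over the reals)
    return np.allclose(M @ M.T, M.T @ M)

assert np.allclose(S, S.T) and is_normal(S)      # self-adjoint, hence normal
assert is_normal(R) and not np.allclose(R, R.T)  # normal, not self-adjoint
```

So normal maps form a strictly bigger class, which is exactly why the real theorem needs the stronger hypothesis.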

Okay, so, we’re gonna do this in steps.

## Step One: Prove Some Weird Lemmas.

Confusingly, we’re going to prove two lemmas before we do anything else, whose use won’t become apparent until we use them to prove our main lemma. I’m adapting this proof mainly from Axler, since I’m following his structure for proving the spectral theorems.

**Lemma**: Suppose that $V$ is a real nontrivial finite-dimensional inner-product space and $T \in \mathcal{L}(V)$ is self-adjoint. Then if $\alpha, \beta \in \mathbb{R}$ are such that $\alpha^{2} < 4\beta$, then $T^{2} + \alpha T + \beta I$ has an inverse.

Proof. Suppose that $\alpha, \beta \in \mathbb{R}$ are such that $\alpha^{2} < 4\beta$. So, then, let’s let $v \in V$ be nonzero, and let’s do this string of equalities:

$$\langle (T^{2} + \alpha T + \beta I)v, v \rangle = \langle T^{2}v, v \rangle + \alpha \langle Tv, v \rangle + \beta \langle v, v \rangle = \langle Tv, Tv \rangle + \alpha \langle Tv, v \rangle + \beta \|v\|^{2}$$

(the first term used self-adjointness: $\langle T^{2}v, v \rangle = \langle Tv, T^{\ast}v \rangle = \langle Tv, Tv \rangle$). And now we notice that $\alpha \langle Tv, v \rangle \geq -|\alpha| \, \|Tv\| \, \|v\|$ by Cauchy–Schwarz (if you don’t get why, look back at the inequality), and so we have

$$\langle (T^{2} + \alpha T + \beta I)v, v \rangle \geq \|Tv\|^{2} - |\alpha| \, \|Tv\| \, \|v\| + \beta \|v\|^{2}.$$

At this point, the right-hand side is a quadratic in $\|Tv\|$, so we will complete the square. Remember how to do that? Look it up if you don’t! That’s algebra II, buster.

$$\|Tv\|^{2} - |\alpha| \, \|Tv\| \, \|v\| + \beta \|v\|^{2} = \left( \|Tv\| - \frac{|\alpha| \, \|v\|}{2} \right)^{2} + \left( \beta - \frac{\alpha^{2}}{4} \right) \|v\|^{2}$$

so, what did that do? Well. Virtually nothing, I guess. WAIT, NO. Since each of these parts is squared, we have that these two parts are either zero or positive. Since $\alpha^{2} < 4\beta$, then we have $\beta - \frac{\alpha^{2}}{4} > 0$, and since $v \neq 0$, the second term is positive. This means that this entire string is strictly positive,

ha. So, in particular, we have (putting all this together)

$$\langle (T^{2} + \alpha T + \beta I)v, v \rangle > 0,$$

which is a nice thing to have. In particular, this means that

$$(T^{2} + \alpha T + \beta I)v \neq 0 \quad \text{for every nonzero } v \in V,$$

and so we must have that $T^{2} + \alpha T + \beta I$ is invertible.

(NOTE: What this actually means is that $T^{2} + \alpha T + \beta I$ is injective, and since $V$ is finite-dimensional, it is also surjective, and hence bijective. This implies that there is an inverse.) $\blacksquare$
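If you want to poke at this lemma numerically before moving on, here’s a throwaway sanity check (the random matrix and the constants $\alpha = 1$, $\beta = 2$ are just made-up examples): for a symmetric $T$ with $\alpha^{2} < 4\beta$, the map $T^{2} + \alpha T + \beta I$ really does come out invertible.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random self-adjoint map on R^4: symmetrize a random matrix.
A = rng.standard_normal((4, 4))
T = (A + A.T) / 2

alpha, beta = 1.0, 2.0            # alpha^2 = 1 < 8 = 4*beta
M = T @ T + alpha * T + beta * np.eye(4)

# The proof actually shows <Mv, v> > 0 for every nonzero v, so the
# (real) eigenvalues of the symmetric matrix M are all positive --
# in particular, M is invertible.
assert np.all(np.linalg.eigvalsh(M) > 0)
```

(Positive definite is even stronger than invertible, and that’s exactly what the inner-product estimate in the proof is telling us.)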

Okay, so, that did…well, that did a little bit. Okay, so, I guess we know something kind of random about quadratics now. Neat. Maybe this will make more sense with this next lemma.


**Lemma**: If $\alpha, \beta \in \mathbb{R}$, then $x^{2} + \alpha x + \beta \neq 0$ for every $x \in \mathbb{R}$ if and only if $\alpha^{2} < 4\beta$.

Proof. We first note that we can factor this by completing the square. We have the factorization

$$x^{2} + \alpha x + \beta = \left( x + \frac{\alpha}{2} \right)^{2} + \left( \beta - \frac{\alpha^{2}}{4} \right).$$

As we noted before, if $\alpha^{2} < 4\beta$, then the right side is positive for every $x$ that we consider in the reals, and so it cannot possibly be 0. Therefore, it has no real roots.

Now, conversely, suppose that $\alpha^{2} \geq 4\beta$. Then, we define $c = \sqrt{\frac{\alpha^{2}}{4} - \beta}$, which is always a nonnegative real number due to the inequality, and this gives us

$$x^{2} + \alpha x + \beta = \left( x + \frac{\alpha}{2} \right)^{2} - c^{2} = \left( x + \frac{\alpha}{2} + c \right)\left( x + \frac{\alpha}{2} - c \right)$$

since this is just the difference of squares now. But, this actually gives us a factored form where the roots are real and equal to $-\frac{\alpha}{2} \pm c$. $\blacksquare$
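Same deal here: a quick numerical check with two example quadratics (the coefficients are arbitrary; `np.roots` just finds roots of a polynomial from its coefficients).

```python
import numpy as np

# x^2 + x + 2: alpha^2 = 1 < 8 = 4*beta, so no real roots --
# numpy returns a complex-conjugate pair.
no_real = np.roots([1.0, 1.0, 2.0])
assert np.all(np.abs(no_real.imag) > 0)

# x^2 - 3x + 2: alpha^2 = 9 >= 8 = 4*beta, so the roots are real
# and sit at -alpha/2 +/- c, with c = sqrt(alpha^2/4 - beta).
alpha, beta = -3.0, 2.0
c = np.sqrt(alpha**2 / 4 - beta)
real_roots = np.roots([1.0, alpha, beta])
assert np.allclose(sorted(real_roots.real),
                   sorted([-alpha / 2 - c, -alpha / 2 + c]))
```

(In the second case the roots come out to 1 and 2, exactly the $-\frac{\alpha}{2} \pm c$ from the proof.)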

## Step Two: Show that Every Self-Adjoint Map has an Eigenvalue.

Okay, so, we need to show this for the first part of our spectral proof. If it doesn’t have at least one eigenvalue, there’s really no hope of having an orthonormal basis made of eigenvectors, now is there?

**Lemma**: Suppose that $V$ is a nontrivial finite-dimensional real inner-product space, and $T \in \mathcal{L}(V)$ is a linear map. If $T$ is self-adjoint, then $T$ has an eigenvalue.

Proof. We’re going to do the same sort of thing we did when we proved an odd-dimensional real space had to have an eigenvalue. Also, let $n = \dim V$. Now let’s consider some $v \in V$ with $v \neq 0$, and then consider the elements

$$(v, Tv, T^{2}v, \ldots, T^{n}v).$$

Now, how many vectors are there? $n + 1$, so they’re definitely linearly dependent. Upsetting. This means that there exist some real numbers $a_{0}, a_{1}, \ldots, a_{n}$, not all zero, such that

$$0 = a_{0}v + a_{1}Tv + a_{2}T^{2}v + \cdots + a_{n}T^{n}v$$

which is nice, because, you know, this is a polynomial. Yeah. So we can do lots of things to this polynomial; in particular, we can factor the hell out of it! Because it’s real, we have that the real zeros correspond to linear factors, and the imaginary zeros are paired with their conjugates in quadratic factors. In other words, we have

$$a_{0} + a_{1}x + \cdots + a_{n}x^{n} = c(x^{2} + \alpha_{1}x + \beta_{1}) \cdots (x^{2} + \alpha_{M}x + \beta_{M})(x - \lambda_{1}) \cdots (x - \lambda_{m})$$

which is a much nicer factored form. That is, if the notation above is confusing, it’s quadratic factors multiplied with linear factors. Also note that $c \neq 0$, obviously, and we have that for each quadratic factor, $\alpha_{j}^{2} < 4\beta_{j}$, otherwise we’d decompose them into real linear factors (as one of the previous lemmas states). Plugging $T$ into this factored polynomial and applying it to $v$, we get

$$0 = c(T^{2} + \alpha_{1}T + \beta_{1}I) \cdots (T^{2} + \alpha_{M}T + \beta_{M}I)(T - \lambda_{1}I) \cdots (T - \lambda_{m}I)v.$$

Now, because $T$ is self-adjoint, and because of that previous inequality $\alpha_{j}^{2} < 4\beta_{j}$, we have that each $T^{2} + \alpha_{j}T + \beta_{j}I$ is invertible, and, therefore, we can sort of just invert them. We’re left with

$$0 = c(T - \lambda_{1}I) \cdots (T - \lambda_{m}I)v$$

and now, you might, at this point, be saying, “well, wait, what if we have no linear factors?” which is exactly what I was saying. But, suppose we don’t. Then what’re we left with? We’re simply left with $0 = cv$. But we supposed $v \neq 0$ and $c \neq 0$, so it follows that we must have at least one linear factor. Creepy, right? This is a good example of math behaving weird and unexpectedly mid-proof! Anyhow, the above equality implies that at least one of the maps $T - \lambda_{j}I$ is not injective,

which implies, in particular, that

$$(T - \lambda_{j}I)u = 0 \quad \text{for some nonzero } u \in V,$$

which implies, again, that $\lambda_{j}$ is an eigenvalue of $T$. Which is nice, because this is exactly what we wanted. $\blacksquare$
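Again, purely as a sanity check (a random symmetric matrix, nothing special about it): a real symmetric matrix always coughs up real eigenvalues, so in particular it has at least one, exactly as the lemma promises. A non-self-adjoint map like a rotation can get away with having none.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
T = (A + A.T) / 2                    # self-adjoint

# Every eigenvalue of a real symmetric matrix is real.
assert np.allclose(np.linalg.eigvals(T).imag, 0.0)

# Contrast: a rotation of R^2 by 90 degrees has no real eigenvalues at all.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.all(np.abs(np.linalg.eigvals(R).imag) > 0)
```

(The rotation’s eigenvalues are $\pm i$, which is exactly the $x^{2} = -1$ problem from the intro showing up as a matrix.)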

## Step Three: Finally Proving the Real Spectral Theorem.

Okay, now, using the last few lemmas, we’re finally going to prove this real spectral theorem that you’d been hearing so much about. Okay, let’s just jump right into it. We’ll give some good examples later.

**Theorem** (The Real Spectral Theorem): Suppose that $V$ is a nontrivial finite-dimensional real inner-product space, and suppose that $T \in \mathcal{L}(V)$. Then $V$ has an orthonormal basis consisting of eigenvectors of $T$ if and only if $T$ is self-adjoint.

Proof. We’ll do the easy part first ($\Rightarrow$). Suppose $V$ has an orthonormal basis consisting of eigenvectors of $T$. Well, because this is a real space, we have that $\mathcal{M}(T)$, the matrix of $T$ with respect to this basis, is a diagonal matrix with all real entries. So what’s $\mathcal{M}(T^{\ast})$, the conjugate transpose of that matrix? It’s the same exact thing. In other words, $\mathcal{M}(T) = \mathcal{M}(T^{\ast})$. This means that $T$ is self-adjoint.

Now let’s do the ($\Leftarrow$) direction. Let’s suppose that $T$ is self-adjoint. I’m going to follow Axler, as usual, here, but just note that there are several good ways to do this proof. I may prove this geometrically in an upcoming post, but, for now, let’s do it by induction on the number of dimensions.

Clearly, if $\dim V = 1$, we have that this holds: $T$ has an eigenvalue (and hence an eigenvector) by the previous lemma. So this eigenvector must be our orthonormal basis once we normalize it.

Suppose, now, that the theorem is true for all real nontrivial finite-dimensional inner-product spaces $W$ with $\dim W < \dim V$.

Let’s let $\lambda$ be any eigenvalue of $T$ (since $T$ is self-adjoint we know that there’s at least one from one of those previous lemmas). Let’s let $u$ denote an eigenvector associated to this eigenvalue and such that $\|u\| = 1$. Now, this is the weird part: let’s let $U = \operatorname{span}(u)$. In other words, this is just all scalar multiples of $u$; it’s a one-dimensional subspace of $V$. Note that $v$ is in $U^{\perp}$ (all the elements perpendicular to $U$) if and only if $\langle u, v \rangle = 0$. This is sort of by definition.

Now, suppose that $v \in U^{\perp}$. Then, we note that, since $T$ is self-adjoint, that

$$\langle u, Tv \rangle = \langle Tu, v \rangle = \langle \lambda u, v \rangle = \lambda \langle u, v \rangle = 0$$

since they’re perpendicular. Okay, this means that $Tv \in U^{\perp}$ (why?). This means that, in general, $Tv \in U^{\perp}$ when $v \in U^{\perp}$. But what’s this mean?! It means that $U^{\perp}$ is invariant under $T$. So, let’s just define a linear map $S \in \mathcal{L}(U^{\perp})$ by just restricting $T$ to the subspace $U^{\perp}$; that is, $S = T|_{U^{\perp}}$. Yeah, this is weird, but trust me. Because now, if $v, w \in U^{\perp}$, then

$$\langle Sv, w \rangle = \langle Tv, w \rangle = \langle v, Tw \rangle = \langle v, Sw \rangle$$

which means that $S$ is self-adjoint too! Are you surprised? You really shouldn’t be. But whatever, pal.

By our induction hypothesis, we have that our subspace $U^{\perp}$ has an orthonormal basis consisting of eigenvectors of $S$. Yes. This gives us a huge piece of the puzzle now. So how can we extend it to the whole of $V$? Just adjoin our original eigenvector $u$ from before! The one that we used to define $U$, remember? The eigenvectors of $S$ are also all eigenvectors of $T$ (why?) and so now we have an orthonormal basis of $V$ consisting of eigenvectors of $T$. This is what we wanted! YES. Good. Done. $\blacksquare$
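One last numerical aside before we wrap up: numpy’s `eigh` routine hands you exactly the orthonormal eigenbasis the theorem promises, for any (here, random and made-up) symmetric matrix. You can check the whole statement at once: the basis is orthonormal, each column is an eigenvector, and the diagonal form reassembles into $T$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
T = (A + A.T) / 2                     # self-adjoint

lams, Q = np.linalg.eigh(T)           # columns of Q are eigenvectors of T

# The eigenvectors form an orthonormal basis: Q^t Q = I.
assert np.allclose(Q.T @ Q, np.eye(4))

# Each column really is an eigenvector with the matching eigenvalue.
for lam, q in zip(lams, Q.T):
    assert np.allclose(T @ q, lam * q)

# And the easy direction: diagonal in an orthonormal basis gives back T = T^t.
assert np.allclose(Q @ np.diag(lams) @ Q.T, T)
```

(That last line is the matrix form of the whole theorem: $T = Q \Lambda Q^{t}$ with $Q$ orthogonal and $\Lambda$ real diagonal.)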

Yes. This is it. Now we can use and abuse the real spectral theorem. And, in fact, we will do such a thing in the next post. We will give an example of how to apply this theorem! Yes, we will *apply* this to *real* things. I know, I’m terrified too.