## Application of the Real Spectral Theorem.

### August 7, 2010

I mean, okay, we wrote about the real spectral theorem, but how much do we *really* know about it? A lot, I hope! I hope you remember that we needed our map $T$ to be self-adjoint in order to conclude that we have an orthonormal basis of eigenvectors with respect to $T$!

So, let’s apply this. Here’s the first question. Let’s let $V = \mathbb{R}^3$ be our vector space with the standard basis. Let’s define $T \in \mathcal{L}(V)$ by

$$T(x, y, z) = (2x + 3y,\; 3x + 2y,\; 2z).$$

Okay, that’s nice. What’s the associated matrix?

$$\mathcal{M}(T) = \begin{pmatrix} 2 & 3 & 0 \\ 3 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$

and, by that conjugate-transpose theorem, we have

$$\mathcal{M}(T^{\ast}) = \mathcal{M}(T)^{\ast} = \begin{pmatrix} 2 & 3 & 0 \\ 3 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$

which implies, instantly, that $T = T^{\ast}$. In other words, $T$ is self-adjoint. This is basically all we needed to say that there’s an orthonormal basis of eigenvectors.
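If you’d like to double-check self-adjointness numerically, here’s a quick sketch in Python with NumPy (the matrix entries, with rows $(2,3,0)$, $(3,2,0)$, $(0,0,2)$, are the ones I’m assuming for this example): a real matrix represents a self-adjoint map, in an orthonormal basis, exactly when it equals its own transpose.

```python
import numpy as np

# Matrix of our map in the standard (orthonormal) basis; these entries
# are an assumption spelled out for this example.
A = np.array([[2.0, 3.0, 0.0],
              [3.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

# For a real matrix the conjugate transpose is just the transpose,
# so "self-adjoint" means A equals A transposed.
print(np.array_equal(A, A.T))  # True
```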

Note that we’ve already proven eigenvectors corresponding to distinct eigenvalues are linearly independent, so we just need to find three of them. Let’s find the eigenvalues first. We’ll use the characteristic polynomial version of this:

$$\det(\mathcal{M}(T) - \lambda I) = \det\begin{pmatrix} 2-\lambda & 3 & 0 \\ 3 & 2-\lambda & 0 \\ 0 & 0 & 2-\lambda \end{pmatrix} = (2-\lambda)\big((2-\lambda)^2 - 9\big)$$

by expansion by minors, and

$$-\lambda^{3} + 6\lambda^{2} - 3\lambda - 10 = 0$$

so we need to solve this. Ugh. Well, let’s see. Does $\lambda = 2$ work? Yes! It does: $-8 + 24 - 6 - 10 = 0$. Let’s factor that out and get rid of it. Now we have

$$\lambda^{2} - 4\lambda - 5 = 0$$

which is easy to factor: the other eigenvalues are $5$ and $-1$.
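If factoring cubics by hand isn’t your thing, you can sanity-check the eigenvalues numerically; a sketch, again assuming the example matrix with rows $(2,3,0)$, $(3,2,0)$, $(0,0,2)$:

```python
import numpy as np

# The example's matrix (an assumption, spelled out for concreteness).
A = np.array([[2.0, 3.0, 0.0],
              [3.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

# eigvalsh is NumPy's routine for symmetric (self-adjoint) matrices;
# it returns the eigenvalues in ascending order: -1, 2, 5.
print(np.linalg.eigvalsh(A))
```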

Good. Now, let’s find the forms of the eigenvectors. So, which one should we start with? Let’s do $\lambda = 2$. Then we have

$$(\mathcal{M}(T) - 2I)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 & 3 & 0 \\ 3 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

which means that our eigenvector must be such that $x = y = 0$, and so, we can consider the eigenvector $(0, 0, 1)$, for example. Next, let’s look at the eigenvalue $-1$. We get

$$(\mathcal{M}(T) + I)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3 & 3 & 0 \\ 3 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

which implies that $x = -y$ and that $z = 0$. We can, therefore, consider the eigenvector $(1, -1, 0)$. Last, we look at the eigenvalue $5$, which gives us

$$(\mathcal{M}(T) - 5I)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -3 & 3 & 0 \\ 3 & -3 & 0 \\ 0 & 0 & -3 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

which tells us, first, that $x = y$, and that $z = 0$. So let’s take the eigenvector $(1, 1, 0)$.
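Each of these is easy to verify by hand, and just as easy to verify numerically: an eigenvector $v$ for eigenvalue $\lambda$ must satisfy $Av = \lambda v$. A sketch, assuming the example matrix with rows $(2,3,0)$, $(3,2,0)$, $(0,0,2)$:

```python
import numpy as np

# The example's matrix (assumed here for concreteness).
A = np.array([[2.0, 3.0, 0.0],
              [3.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

# (eigenvalue, eigenvector) pairs found above.
pairs = [(2.0, np.array([0.0, 0.0, 1.0])),
         (-1.0, np.array([1.0, -1.0, 0.0])),
         (5.0, np.array([1.0, 1.0, 0.0]))]

# Each vector should satisfy A v = lam * v.
for lam, v in pairs:
    print(np.allclose(A @ v, lam * v))  # True for each pair
```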

Notice that these eigenvectors were chosen nearly at random from the ones that would work, given our restrictions. Are these linearly independent? Well, from the theorem, yes, but let’s just prove this.

Suppose that

$$a(0, 0, 1) + b(1, -1, 0) + c(1, 1, 0) = (0, 0, 0)$$

which implies, immediately, that $a = 0$ (look at the third coordinate). So now, we have

$$b + c = 0 \quad \text{and} \quad -b + c = 0$$

(Why is this? Manipulate the equations.) which implies that $2c = 0$, and so both must be $0$. Thus, these three vectors are linearly independent. Therefore

$$\{(0, 0, 1),\; (1, -1, 0),\; (1, 1, 0)\}$$

is a basis for this particular vector space. And they’re all eigenvectors. But are they orthogonal? Well, yes. Remember? No? If a map is self-adjoint, then eigenvectors corresponding to distinct eigenvalues are orthogonal. Here’s a refresher:

**Theorem**: Given that $V$ is a nontrivial finite-dimensional inner-product space and $T \in \mathcal{L}(V)$ is self-adjoint, if $u$ and $v$ are eigenvectors corresponding to distinct eigenvalues, then they are orthogonal.

**Proof.** Suppose $u$ and $v$ are two eigenvectors from the list above, corresponding to distinct eigenvalues $\lambda$ and $\mu$. Then, we have

$$\lambda \langle u, v \rangle = \langle Tu, v \rangle = \langle u, Tv \rangle = \mu \langle u, v \rangle$$

and, if $\langle u, v \rangle$ is non-zero, this implies that $\lambda = \mu$, which is a contradiction. Therefore, $\langle u, v \rangle = 0$, and this implies that they are orthogonal. $\blacksquare$
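And, indeed, the pairwise inner products of our three eigenvectors all vanish; a quick numerical check:

```python
import numpy as np

# The three eigenvectors found above.
vs = [np.array([0.0, 0.0, 1.0]),
      np.array([1.0, -1.0, 0.0]),
      np.array([1.0, 1.0, 0.0])]

# Eigenvectors for distinct eigenvalues of a self-adjoint map are
# pairwise orthogonal: every dot product below is 0.
for i in range(len(vs)):
    for j in range(i + 1, len(vs)):
        print(float(np.dot(vs[i], vs[j])))  # 0.0 each time
```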

This means we only need to scale them and we’ll have an orthonormal basis! Cool. So, let’s just divide each by its norm and we get

$$\left\{ (0, 0, 1),\; \tfrac{1}{\sqrt{2}}(1, -1, 0),\; \tfrac{1}{\sqrt{2}}(1, 1, 0) \right\}$$

which makes this entire basis orthonormal. You can make it prettier, but this is essentially it. We said we’d make an orthonormal basis made of eigenvectors of $T$, and we’ve done just that. Congratulations!
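One way to see the payoff: stack the normalized eigenvectors as the columns of a matrix $Q$. Then $Q^{T}Q = I$ (that’s orthonormality), and $Q^{T}AQ$ is diagonal with the eigenvalues on the diagonal. A sketch, once more assuming the example matrix with rows $(2,3,0)$, $(3,2,0)$, $(0,0,2)$:

```python
import numpy as np

# The example's matrix (assumed here for concreteness).
A = np.array([[2.0, 3.0, 0.0],
              [3.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])

# Columns of Q: the normalized eigenvectors, for eigenvalues 2, -1, 5.
Q = np.column_stack([np.array([0.0, 0.0, 1.0]),
                     np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
                     np.array([1.0, 1.0, 0.0]) / np.sqrt(2)])

# Orthonormal columns: Q^T Q is the identity.
print(np.allclose(Q.T @ Q, np.eye(3)))                      # True
# In this basis the map is diagonal, eigenvalues on the diagonal.
print(np.allclose(Q.T @ A @ Q, np.diag([2.0, -1.0, 5.0])))  # True
```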

Now, as an extra exercise, think about what this means. What does it mean to have an orthogonal basis of eigenvectors of a map? Think about the vector space before we apply $T$ and then after we apply $T$. What happens to the basis vectors above? Hm.