## Application of the Real Spectral Theorem.

### August 7, 2010

I mean, okay, we wrote about the real spectral theorem, but how much do we really know about it?  A lot, I hope!  I hope you remember that we needed $T$ to be self-adjoint in order to conclude that there is an orthonormal basis of the space consisting of eigenvectors of $T$!

So, let’s apply this.  Here’s the first question.  Let’s let ${\mathbb R}^{3}$ be our vector space with the standard basis.  Let’s define $T:{\mathbb R}^{3}\rightarrow {\mathbb R}^{3}$ by

$T((x,y,z)) = (2x + z, y, x + 2z)$

Okay, that’s nice.  What’s the associated matrix?

$M(T) = \left(\begin{array}{ccc} 2 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 2\end{array}\right)$

and, since the matrix of the adjoint is the conjugate transpose (which, over the reals, is just the transpose), we have

$M(T^{\ast}) = \left(\begin{array}{ccc} 2 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 2\end{array}\right)$

which implies, instantly, that $M(T) = M(T^{\ast})$.  In other words, $T$ is self-adjoint.  This is basically all we needed to say that there’s an orthonormal basis of eigenvectors.
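If you'd like to double-check the matrix and the self-adjointness numerically, here's a quick sketch in plain Python (the name `T` is just our map from above, nothing standard):

```python
# The map T from above, acting on a 3-tuple.
def T(v):
    x, y, z = v
    return (2 * x + z, y, x + 2 * z)

# The columns of M(T) are T applied to the standard basis vectors.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
columns = [T(e) for e in basis]

# Rebuild M(T) row by row from those columns.
M = [[columns[j][i] for j in range(3)] for i in range(3)]
print(M)  # [[2, 0, 1], [0, 1, 0], [1, 0, 2]]

# Over the reals with the standard inner product, self-adjoint
# means the matrix equals its own transpose.
transpose = [[M[j][i] for j in range(3)] for i in range(3)]
print(M == transpose)  # True
```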

Note that we’ve already proven that eigenvectors corresponding to distinct eigenvalues are linearly independent, so we just need to find three linearly independent eigenvectors.  Let’s find the eigenvalues first.  We’ll use the characteristic polynomial version of this:

$\det(M(T) - \lambda I) = \left| \begin{array}{ccc} 2-\lambda & 0 & 1 \\ 0 & 1-\lambda & 0 \\ 1 & 0 & 2-\lambda\end{array}\right|$

$= (2-\lambda)(2 - 3\lambda + \lambda^{2}) - 1(1-\lambda)$

by expansion by minors (careful with the cofactor sign on the $(1,3)$ entry), and

$= 3 - 7\lambda + 5\lambda^{2} - \lambda^{3} = 0$

so we need to solve this.  Ugh.  Well, let’s see.  Does $\lambda = 1$ work?  Yes!  It does.  Let’s factor that out and get rid of it.  Now we have

$x^{2} - 4x + 3 = 0$

which is easy to factor: $(x-1)(x-3) = 0$, so the other eigenvalues are 1 (again!) and 3.  The repeated eigenvalue isn’t a problem; it just means the eigenspace for $\lambda = 1$ will turn out to be two-dimensional.
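If you don’t trust my arithmetic (fair!), here’s a quick sanity check on the characteristic polynomial in plain Python:

```python
# Characteristic polynomial det(M(T) - t*I) = 3 - 7t + 5t^2 - t^3,
# as computed above.
def char_poly(t):
    return 3 - 7 * t + 5 * t ** 2 - t ** 3

# The eigenvalues are exactly the roots...
print(char_poly(1))  # 0
print(char_poly(3))  # 0
# ...and a value that isn't an eigenvalue is not a root.
print(char_poly(2))  # 1
```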

Good.  Now, let’s find the forms of the eigenvectors.  Which one should we start with?  Let’s do $\lambda = 1$.  Then we have

$T((x,y,z)) = (2x + z, y, x + 2z) = (x,y,z)$

which means that our eigenvector must be such that $x = -z$, with $y$ free — a two-dimensional eigenspace.  We can consider the eigenvector $(1,1,-1)$, for example.  Since the eigenspace is two-dimensional, we get to pick a second eigenvector from it, and (with our orthonormal goal in mind) let’s pick one orthogonal to the first: a vector $(x, y, -x)$ satisfies $\langle (x,y,-x), (1,1,-1)\rangle = 2x + y = 0$ exactly when $y = -2x$, so we can take $(1,-2,-1)$.  Last, we look at the $\lambda = 3$ eigenvalue, which gives us

$T((x,y,z)) = (2x + z, y, x + 2z) = (3x, 3y, 3z)$

which tells us, first, that $y = 0$, and that $z = x$.  So let’s take the eigenvector $(1,0,1)$.
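Here’s a quick sketch in plain Python checking that each of these really is an eigenvector with the claimed eigenvalue (again, `T` is just our map, and `scale` is a made-up helper):

```python
# The map T from above.
def T(v):
    x, y, z = v
    return (2 * x + z, y, x + 2 * z)

# Multiply a vector by a scalar, componentwise.
def scale(c, v):
    return tuple(c * comp for comp in v)

# Each claimed (eigenvalue, eigenvector) pair.
pairs = [(1, (1, 1, -1)), (1, (1, -2, -1)), (3, (1, 0, 1))]
for lam, v in pairs:
    print(T(v) == scale(lam, v))  # True for each pair
```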

Notice that these eigenvectors were chosen nearly at random from the ones that would work, given our restrictions (except that we deliberately made the two $\lambda = 1$ eigenvectors orthogonal to each other).  Are these linearly independent?  Let’s just prove this.

Suppose that

$(0,0,0) = a(1,1,-1) + b(1,-2,-1) + c(1,0,1)$

$= (a,a,-a) + (b,-2b,-b) + (c,0,c)$

$= (a + b + c, a - 2b, -a - b + c)$

Adding the first and third coordinates gives $2c = 0$, so $c = 0$.  The second coordinate gives $a = 2b$, and then the first gives $a + b = 3b = 0$, so both $a$ and $b$ must be 0.  Thus, these three vectors are linearly independent.  Therefore

$\{(1,1,-1), (1,-2,-1), (1,0,1)\}$

is a basis for this particular vector space.  And they’re all eigenvectors.  But are they orthogonal?  Well, yes.  Remember?  No?  If a map is self-adjoint, then eigenvectors corresponding to distinct eigenvalues are orthogonal — and we chose the two $\lambda = 1$ eigenvectors to be orthogonal to each other by hand.  Here’s a refresher:

Theorem: Let $V$ be a nontrivial finite-dimensional inner-product space and let $T:V\rightarrow V$ be self-adjoint.  If $e_{1}, \dots, e_{n}$ are eigenvectors corresponding to distinct eigenvalues, then they are pairwise orthogonal.

Proof. Suppose $e_{i}$ and $e_{k}$ are two eigenvectors from the list above, with $\lambda_{i} \neq \lambda_{k}$.  Then, we have

$\langle T(e_{i}), e_{k}\rangle = \langle \lambda_{i} e_{i}, e_{k}\rangle = \lambda_{i} \langle e_{i} ,e_{k}\rangle$

$= \langle e_{i}, T^{\ast}(e_{k}) \rangle = \langle e_{i}, T(e_{k})\rangle$

$= \langle e_{i}, \lambda_{k}e_{k}\rangle = \lambda_{k}\langle e_{i}, e_{k}\rangle$

and, if $\langle e_{i}, e_{k}\rangle$ is non-zero, this implies that $\lambda_{k} = \lambda_{i}$ which is a contradiction.  Therefore, $\langle e_{i}, e_{k}\rangle = 0$, and this implies that they are orthogonal.  $\Box$

This means we only need to normalize them and we’ll have an orthonormal basis!  Cool.  So, let’s divide each by its norm and we get

$\displaystyle \{\frac{(1,1,-1)}{\sqrt{3}}, \frac{(1,-2,-1)}{\sqrt{6}}, \frac{(1,0,1)}{\sqrt{2}}\}$

which makes this entire basis orthonormal.  You can make it prettier, but this is essentially it.  We said we’d make an orthonormal basis made of eigenvectors of $T$, and we’ve done just that.  Congratulations!
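One last sketch in plain Python, checking that the normalized vectors really are orthonormal — every pairwise dot product 0, every norm 1 (`dot` is a made-up helper):

```python
import math

vectors = [(1, 1, -1), (1, -2, -1), (1, 0, 1)]

# Standard inner product on R^3.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Normalize each vector by dividing by its norm.
normalized = [tuple(c / math.sqrt(dot(v, v)) for c in v) for v in vectors]

# Dot products should be 1 on the diagonal, 0 off it
# (up to floating-point error).
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(normalized[i], normalized[j]) - expected) < 1e-12
print("orthonormal")  # prints "orthonormal"
```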

Now, as an extra exercise, think about what this means.  What does it mean to have an orthogonal basis of eigenvectors of a map?  Think about the vector space before we apply $T$ and then after we apply $T$.  What happens to the basis vectors above?  Hm.