## Invariant Subspaces and Eigenvalues: BFF’s.

### July 14, 2010

So, the last post was a bit long, but we needed it to learn things.  So what did we learn?  Well, if we have some finite-dimensional space $V$, some linear map $T$, and eigenvectors $\{v_{1}, v_{2}, \dots, v_{m}\}$ corresponding to distinct eigenvalues of $T$, then we can find associated one-dimensional invariant subspaces (subspaces that $T$ maps into themselves) $U_{1}, U_{2}, \dots, U_{m}$ such that we can write $V = U_{1} \oplus U_{2} \oplus \dots \oplus U_{m} \oplus W$, where $W$ is the subspace we constructed last time by extending the linearly independent set of eigenvectors to a basis for $V$.

This time, we’re gonna play around with this idea a little bit.  We’re going to start with a pretty simple theorem.  Try to prove this yourself!

Theorem: Let $V$ have dimension $n$.  Then every linear map $T: V\rightarrow V$ has at most $n$ distinct eigenvalues.

Proof. Welp, suppose not.  Then we have $m > n$ distinct eigenvalues; pick an eigenvector for each one.  By one of the theorems from the last post, eigenvectors corresponding to distinct eigenvalues are linearly independent, so we now have $m > n$ linearly independent vectors sitting inside a space of dimension $n$.  Weird.  $\Rightarrow\Leftarrow$$\Box$.
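To see the bound in action, here's a quick numerical sketch using NumPy (the matrices are my own toy examples): `eigvals` returns exactly $n$ eigenvalues, counted with multiplicity, for a map on an $n$-dimensional space, so there can never be more than $n$ distinct ones.

```python
import numpy as np

# A diagonal map on a 3-dimensional space: its eigenvalues are exactly
# the diagonal entries, so here we hit the bound of 3 distinct ones.
T = np.diag([1.0, 2.0, 3.0])
distinct = set(np.round(np.linalg.eigvals(T), 10))

# The identity map on the same space has only ONE distinct eigenvalue;
# the bound n is a ceiling, not a guarantee.
distinct_id = set(np.round(np.linalg.eigvals(np.eye(3)), 10))

print(len(distinct), len(distinct_id))  # 3 1
```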

So, this tells us that we have a nice amount of eigenvalues: the number of distinct ones is bounded by the dimension, which is, in fact, not so obvious if you just think about the definition.  Okay, so maps can have only a few eigenvalues, but do they even have to have any?  Can there be maps with no eigenvalues?  Yeah, of course.  Let’s take $T:{\mathbb R}^{2}\rightarrow {\mathbb R}^{2}$ defined by $T(x,y) = (y,0)$.

What would we need to have an eigenvalue for this?  $T(x,y) = (y,0) = \lambda(x,y) = (\lambda x, \lambda y)$.  (Let’s note here that I do not count $\lambda = 0$ as an eigenvalue in this post.  Strictly speaking, $\lambda = 0$ is an eigenvalue exactly when $T$ has a nontrivial kernel, as it does here since $T(1,0) = (0,0)$, but it won’t tell us anything interesting.)  The second coordinate gives $\lambda y = 0$, which means (since $\lambda \neq 0$) that $y = 0$.  The first coordinate gives $y = \lambda x$, so $\lambda x = 0$ and thus $x = 0$.  The only eigenvalue is $0$, and we don’t count that one.
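In matrix form this map has rows $(0,1)$ and $(0,0)$, and we can check the computation above numerically with NumPy (a small sketch):

```python
import numpy as np

# T(x, y) = (y, 0), acting on column vectors.
T = np.array([[0.0, 1.0],
              [0.0, 0.0]])

eigenvalues = np.linalg.eigvals(T)
print(eigenvalues)  # both are 0

# Under this post's convention (lambda = 0 doesn't count), T has no eigenvalue.
nonzero = [lam for lam in eigenvalues if abs(lam) > 1e-12]
print(nonzero)  # []
```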

Well, okay, that sucks.  But can we ever guarantee that there will be an eigenvector for a particular map on a particular space?  Can we?  Sort’a kind’a!

Theorem: If $V$ is a nontrivial finite dimensional complex space, then every linear map $T:V\rightarrow V$ has a (nonzero) eigenvector.

Proof. (Notice, first, for the sophisticated mathematician, that this proof uses nothing more than the algebraic closedness of ${\mathbb C}$.  Therefore, this theorem holds not only for complex vector spaces but for vector spaces over any algebraically closed field.  All we need is to be able to factor polynomials into linear factors.)

Okay, now, the proof.  Well, since $V$ is finite dimensional, let it have dimension $n$.  Now, given some nontrivial element $v\in V$, certainly the set $\{v, Tv, T^{2}v, \dots, T^{n}v\}$ is not linearly independent, since it has $n+1$ elements and $\dim V = n$.  Therefore, some linear combination of these vectors adds up to $0$ without all the coefficients being $0$.  So let’s write one down:

$a_{n}T^{n}v + a_{n-1}T^{n-1}v + \dots + a_{1}Tv + a_{0}v = 0$

and we can factor the $v$ out on the right:

$(a_{n}T^{n} + a_{n-1}T^{n-1} + \dots + a_{1}T + a_{0}I)v = 0$

where $I$ is the identity map.  (Note that some coefficient $a_{i}$ with $i \geq 1$ must be nonzero: if only $a_{0}$ were, then $a_{0}v = 0$ would force $v = 0$.)  But, now, here’s the clever part: because we’re working in the complex numbers, every complex polynomial can be factored into linear terms (algebraic closedness of ${\mathbb C}$) and so we now write

$(a_{n}T^{n} + a_{n-1}T^{n-1} + \dots + a_{0}I)v \\ = (b_{1}T - c_{1}I) \cdots (b_{n}T - c_{n}I)v = 0$

and so, since $v \neq 0$, the composition on the left sends a nonzero vector to $0$ and hence is not injective.  But a composition of injective maps is injective, so some factor $b_{i}T - c_{i}I$ must fail to be injective, and it must have $b_{i} \neq 0$ (why?  If $b_{i} = 0$, the factor is a nonzero multiple of $I$, which is certainly injective).  What’s this mean?  That there is some nonzero $w$ with $b_{i}Tw = c_{i}w$, that is, $Tw = \frac{c_{i}}{b_{i}}w$, which means $w$ is an eigenvector of $T$.  Kind’a neat.  $\Box$.
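We can watch the theorem happen numerically: pick any complex matrix at all and NumPy's `eig` produces eigenvalue/eigenvector pairs satisfying $Tv = \lambda v$.  A sketch (the seed and the dimension are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# An arbitrary linear map on a 5-dimensional complex space.
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(T)
lam, v = eigenvalues[0], eigenvectors[:, 0]

# Check the defining equation T v = lambda v for the first pair.
residual = np.linalg.norm(T @ v - lam * v)
print(len(eigenvalues), residual < 1e-8)  # 5 True
```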

Alright, so, we have that going on.  Every linear map on a nontrivial finite dimensional complex space has at least one eigenvalue.  What about real spaces?  Anything about those things?

Theorem: If $V$ is a finite dimensional nontrivial real space of odd dimension, and if $T: V\rightarrow V$ is a linear map, then $T$ has at least one eigenvalue.

Proof. Notice that the part that’s important is the odd dimensional part.  I can’t stress this enough.  Why is this?  It follows more or less the same proof as above, but now we only know that every real polynomial can be decomposed into linear and quadratic factors (that is, every real zero corresponds to a linear factor, and every conjugate pair of complex zeros corresponds to a quadratic factor).  So we make the same polynomial as above, using the exact same process for some $v\neq 0$:

$(a_{n}T^{n} + a_{n-1}T^{n-1} + \dots + a_{1}T + a_{0}I)v = 0$

but instead of being able to decompose this into linear terms, we have to decompose it into linear and quadratic factors.  But if the polynomial has odd degree, then no matter how many quadratic factors we have, there must be at least one linear factor (why?  It’s extremely important that you know why this is.  If you can’t get it, just think about 7 magnets: you can either pair them together in sets of 2, or keep them by themselves.  Is there any way to split all of them into sets of 1 or 2 such that there are no sets of 1?  No way.  Same idea here.)  As in the complex case, some factor must fail to be injective, and a linear factor that fails to be injective hands us an eigenvector just as before.  (A caveat for the careful reader: the polynomial we built only has degree at most $n$, so it might have even degree even when $n$ is odd, and the non-injective factor might be one of the quadratics.  One clean patch is the characteristic polynomial $\det(T - xI)$, which has degree exactly $n$; since $n$ is odd, the intermediate value theorem gives it a real root $\lambda$, and then $T - \lambda I$ is not injective.  The counting idea above is still the heart of it.)  So there must be an eigenvalue.  $\Box$.
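The pairing idea shows up numerically too: the non-real eigenvalues of a real matrix come in conjugate pairs, so a real matrix of odd size always carries at least one real eigenvalue.  A quick sketch with NumPy (the seed is just for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary real linear map on an odd-dimensional (5-dimensional) space.
T = rng.standard_normal((5, 5))

eigenvalues = np.linalg.eigvals(T)

# Non-real eigenvalues pair up with their conjugates, so among 5 of them
# at least one must be real (have zero imaginary part).
real_ones = [lam for lam in eigenvalues if abs(lam.imag) < 1e-9]
print(len(real_ones) >= 1)  # True
```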

Now, what if we tried this for an even dimensional space?  Would it work?  No.  It only follows that some quadratic factor of that big polynomial in the previous proof fails to be injective, and this is not enough to give us an eigenvalue.  Upsetting, but true.
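A concrete instance of the even dimensional failure: rotation by 90 degrees in the plane.  Its eigenvalues are $\pm i$, purely imaginary, so it has no real eigenvalue at all.  A quick NumPy check:

```python
import numpy as np

# Rotation by 90 degrees in R^2: T(x, y) = (-y, x).
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues = np.linalg.eigvals(T)

# The eigenvalues are +i and -i: purely imaginary, so no real eigenvalue.
real_ones = [lam for lam in eigenvalues if abs(lam.imag) < 1e-9]
print(real_ones)  # []
```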

This is enough for now.  Some things to consider:

• Are there linear maps on even dimensional real spaces that have an eigenvector?
• Does every linear map on a complex space of dimension $n$ have exactly $n$ distinct eigenvalues?
• What happens if we let an eigenvalue $\lambda = 0$?  What are the associated eigenvectors (for some linear map on some arbitrary finite dimensional space)?
• What can we say about a linear map on a space of dimension $n$ that has exactly $n$ distinct eigenvalues?

This last question is particularly motivating: it would allow us to decompose spaces into little tiny pieces, which is pretty nice.  Next time we will go into what upper triangular matrices have to do with this kind of thing.  We’re going to end up at the idea of a diagonal matrix, which will tell us a whole hell-of-a-lot about the eigenvalues and invariant subspaces of a space with respect to a linear map.  Pretty much, we’re going to be able to bypass a lot of this difficult work once we figure out how to use diagonal matrices in the proper way!  Neat.