So Close and Yet So Far: What you can do if you know ALMOST every Eigenvalue.

November 30, 2010

Linear Algebra is strange.  On the surface, we have a ton of tricks we can apply to make calculations nicer (diagonalizing matrices, finding orthonormal bases, …) but deeper down a lot of things connect to one another in really unexpected ways (to me, anyway!).

Here’s the problem.  We have an $n\times n$ matrix called $M$, and we have $n-1$ eigenvalues.  We have ALMOST every eigenvalue, but we’re just missing one.  What can we do about this?

There are a few possible ways.  For example, the less creative one is:

Multiply the factors $(x - \lambda_{i})$ together for each known eigenvalue $\lambda_{i}$, compute the characteristic polynomial of the matrix, and divide the polynomial by the product.

But this way requires us to take a messy determinant, and that’s kind of annoying.  What can we learn from taking just the regular determinant?  That is, instead of computing $\det(M - \lambda I)$, why don’t we just compute $\det(M)$?  Ultimately, this is not much easier (on my computer, the difference for a 100×100 matrix is under a few seconds) but we already have a ton of information at our disposal: namely, the other eigenvalues.  Let’s recall two important facts:

The trace of a matrix is the sum of its eigenvalues (counted with multiplicity).

The determinant of a matrix is the product of its eigenvalues (counted with multiplicity).

You should be able to prove these with little effort; if not, they’re in every linear algebra textbook.  Recall that the trace is the sum of the diagonal elements.
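These two facts are also easy to check numerically.  Here’s a quick sketch with NumPy; the matrix is an arbitrary one I made up just for illustration:

```python
import numpy as np

# An arbitrary matrix, made up just to check the two facts numerically.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues = np.linalg.eigvals(M)

# Trace = sum of the eigenvalues; determinant = their product.
print(np.trace(M), eigenvalues.sum())        # both 5
print(np.linalg.det(M), eigenvalues.prod())  # both 5 (up to rounding)
```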

So, what can we do with these?  Let’s note the following.

If $\{\lambda_{1}, \dots, \lambda_{n-1}\}$ are eigenvalues of $M$, then our final eigenvalue is equal to

$\lambda_{n} = \displaystyle Trace(M) - \sum_{i=1}^{n-1}\lambda_{i}.$

Yes.  That’s it.  You just take the trace and subtract.  This is significantly faster than computing the characteristic polynomial, but, recall, we need a lot of information beforehand: we actually needed all but one of the eigenvalues to start.
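In code, the trick really is one line.  A sketch with NumPy, using a made-up symmetric matrix (symmetric just so the eigenvalues are guaranteed real):

```python
import numpy as np

# A made-up symmetric matrix (symmetric so the eigenvalues are real).
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])

eigs = np.sort(np.linalg.eigvals(M).real)
known = eigs[:-1]                 # pretend we only know n - 1 of them

# The trick: the missing eigenvalue is the trace minus the known ones.
missing = np.trace(M) - known.sum()

print(missing, eigs[-1])          # these agree
```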

In fact, if we have two eigenvalues missing, we can use the determinant and the trace together to solve for the last two eigenvalues in almost the same way.  We’ll do an example of this below.

Examples!

Example 1.

Let’s say we have the following matrix, $A$, which is equal to

$\left(\begin{array}{cc} 0 & 2 \\ 1 & 2\end{array}\right)$

We’re gonna solve this two different ways.  First, suppose some jerk comes up from behind you in a coffee place, sees the matrix, and tells you, “Oh, yeah, one of the eigenvalues is $1 + \sqrt{3}$.”  You begrudgingly thank him, but you wonder how to find the other one.  Well, it’s easy.

$Trace(A) = 2$

and thus,

$\lambda_{2} = 2 - (1 + \sqrt{3}) = 1 - \sqrt{3}$

which is the solution.

Let’s do this one a different way now.  Suppose no one talks to you at the coffee shop because you’re kind of awkward, so you don’t know either of the eigenvalues.  Yes, you could do that characteristic polynomial thing, but because only two eigenvalues are missing we can use the following trick:

$\det(A) = 0 - 2 = -2 = \lambda_{1}\lambda_{2}$

$Trace(A) = 2 = \lambda_{1} + \lambda_{2}$

We can make this into a nice quadratic now by noting that

$\lambda_{2} = 2 - \lambda_{1}$

and so replacing this into the first equation, we get

$-2 = \lambda_{1}(2 - \lambda_{1}) = 2\lambda_{1} - \lambda_{1}^{2}$

which we rewrite as

$\lambda_{1}^{2} - 2\lambda_{1} - 2 = 0$

note that we’ve multiplied by $-1$ to make things look nice.  Using the quadratic formula, you’ll find the eigenvalues that we found above.

Note that this is not necessarily any easier than finding the characteristic polynomial — just another way to do the same problem!
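Here’s that second method as a NumPy sketch: the two eigenvalues are the roots of $x^{2} - Trace(A)\,x + \det(A) = 0$, so we can hand the quadratic straight to a root-finder.

```python
import numpy as np

# The matrix from Example 1.
A = np.array([[0.0, 2.0],
              [1.0, 2.0]])

t = np.trace(A)         # lambda_1 + lambda_2 =  2
d = np.linalg.det(A)    # lambda_1 * lambda_2 = -2

# The eigenvalues are the roots of x^2 - t*x + d = 0.
roots = np.sort(np.roots([1.0, -t, d]).real)
print(roots)            # 1 - sqrt(3) and 1 + sqrt(3)
```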

Example 2.

Alright, that last one wasn’t too hard, even using the characteristic polynomial.  So, let’s try this one on for size:

$\left(\begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1\end{array}\right)$

We can do it the normal way, which (in this case) is quite easy since the matrix is lower triangular, so the eigenvalues are just the diagonal entries.  Alternatively, suppose we know one of the eigenvalues is $\lambda_{1} = 1$, which we can find simply from the definition of eigenvalue; it kind of “pops out at you” in this case.  (Matrices with an eigenvalue of 1 often have this property: it’s relatively obvious just from multiplying an arbitrary vector by $A$ and seeing what you get.)  Then we can use our alternate method.

The determinant here is just 1, and the trace here is 3.  Hence,

$\lambda_{1} + \lambda_{2} + \lambda_{3} = 1 + \lambda_{2} + \lambda_{3} = 3$

hence

$\lambda_{2} + \lambda_{3} = 2$

and

$\lambda_{2}\lambda_{3} = 1$

Well, what does this tell us?  Rearranging the first equation, we get

$\lambda_{3} = 2 - \lambda_{2}$

and plugging this in gives us

$\lambda_{2}(2 - \lambda_{2}) = 1$

$2\lambda_{2} - \lambda_{2}^{2} = 1$

which is just

$\lambda_{2}^{2} - 2\lambda_{2} + 1 = 0$

which, as any high school student knows, is

$(\lambda_{2} - 1)(\lambda_{2} - 1) = 0$

implying our other two eigenvalues are also 1.  This particular example was easy, but when the eigenvalues are more complicated (or even complex!) this method comes in handy.  In fact, in this case it would have been much easier to calculate the characteristic polynomial; this is not always the case, especially when the matrix is not triangular.
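The same derivation runs in a few lines of NumPy: with $\lambda_{1} = 1$ known, the other two eigenvalues are the roots of a quadratic built from the trace and the determinant.

```python
import numpy as np

# The matrix from Example 2.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

s = np.trace(A) - 1.0        # lambda_2 + lambda_3 = 2
p = np.linalg.det(A) / 1.0   # lambda_2 * lambda_3 = 1

# The remaining eigenvalues are the roots of x^2 - s*x + p = 0.
roots = np.roots([1.0, -s, p]).real
print(roots)                 # both roots are 1
```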

Example 3.

One more example.  This time, it’s not gonna be triangular.  Let’s let $A$ be the following matrix:

$\left(\begin{array}{ccc} 1 & 1 & 1 \\ 2 & 2 & 1 \\ 1 & 0 & 1\end{array}\right)$

Let’s be exceedingly clever here: just multiplying by an arbitrary vector, we find that $\lambda =1$ is an eigenvalue.  (I usually multiply it out, and then see if either $\lambda = 1, -1$ work first, or if there’s any other obvious values from the equations we obtain.)
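That guess-and-check step can be automated: $\lambda$ is an eigenvalue exactly when $\det(A - \lambda I) = 0$, so we can just run through the easy candidates.  A sketch:

```python
import numpy as np

# The matrix from Example 3.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 1.0],
              [1.0, 0.0, 1.0]])

# lambda is an eigenvalue exactly when det(A - lambda*I) = 0,
# so try the "obvious" candidates 1 and -1 first.
for guess in (1.0, -1.0):
    d = np.linalg.det(A - guess * np.eye(3))
    print(guess, np.isclose(d, 0.0))   # True for 1.0, False for -1.0
```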

Now what?  Well, it would be annoying to do the characteristic polynomial.  So, let’s do the trace.

$Trace(A) = 4$

and so we need

$\lambda_{2} + \lambda_{3} = 3$

or, in other words

$\lambda_{3} = 3 - \lambda_{2}$.

Computing the determinant is kind of annoying here, but let’s expand by minors along the last row (you can do it any way you want, though):

$\det(A) = 1(1 - 2) + 0 + 1(2 - 2) = -1$

Well, that wasn’t so bad.  Now, we have

$\lambda_{2}\lambda_{3} = \lambda_{2}(3 - \lambda_{2}) = -1$

and this is just

$\lambda_{2}^{2} - 3\lambda_{2} - 1 = 0.$

Using a fancy calculator, or by the quadratic formula, we get that the roots of this are

$\frac{1}{2}(3 - \sqrt{13})$

$\frac{1}{2}(3 + \sqrt{13})$

which are the eigenvalues $\lambda_{2}$ and $\lambda_{3}$ as required!  See, that wasn’t so bad?
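And here’s all of Example 3 in a few lines of NumPy, assuming we’ve already spotted $\lambda_{1} = 1$:

```python
import numpy as np

# The matrix from Example 3.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 1.0],
              [1.0, 0.0, 1.0]])

s = np.trace(A) - 1.0        # lambda_2 + lambda_3 =  3
p = np.linalg.det(A) / 1.0   # lambda_2 * lambda_3 = -1

# The remaining eigenvalues are the roots of x^2 - s*x + p = 0.
roots = np.sort(np.roots([1.0, -s, p]).real)
print(roots)                 # (3 - sqrt(13))/2 and (3 + sqrt(13))/2
```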

To be completely honest, I only use this method when I have a 3×3 matrix and I’m having a hard time factoring the cubic.  I just keep trying numbers until I get something nice (usually $-1$ or $1$), and then I either divide out to get a quadratic or, if the determinant is easy to take (lots of zeros in some row), quickly compute it along with the trace.  It amounts to exactly the same thing, really; it just depends on whether you’d rather use synthetic division or compute a determinant.  Different strokes.

(Edit: I have been noticing, lately, that I also use this method to check whether I’ve got the eigenvalues correct.  On a midterm exam for linear algebra, for example, we needed to find the eigenvalues for some matrix, and I accidentally factored a quadratic wrong (an embarrassing mistake!) by putting $(\lambda + 2)$ instead of $(\lambda - 2)$.  Since I knew the first two eigenvalues had to be $\lambda = 0, 1$ (I’d found the eigenspaces for these), I realized that, since the trace was equal to 3, the last eigenvalue had to be 2.  Sure enough, I found my sign error; but if I didn’t know this trace trick, it might not have been so easy to realize I was just making a sign error rather than an error of some more irritating and substantial kind.)