Why Diagonal Matrices are Better Than Love.

July 16, 2010

What is a diagonal matrix?  It’s one that looks like this:

\left( \begin{array}{cccc} a_{1} & 0 & \dots & 0 \\ 0 & a_{2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{n} \end{array}\right)

There are a number of good reasons to like diagonal matrices.  They look kind of neat and they have some sweet properties.  So what are some of these properties?
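For instance, here’s a perfectly respectable one with n = 2, a_{1} = 2, and a_{2} = 3:

\left( \begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array}\right)

Keep it in mind; every claim below is easy to sanity-check against it.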

Wanna take the determinant?  Just multiply the elements of the diagonal.

Theorem: Given a diagonal matrix M like the one above, we have that det(M) = a_{1}a_{2}\cdots a_{n}.

Proof. How do you find a determinant?  By minors.  Expand down the first column: the only non-zero entry is a_{1}, so det(M) is a_{1} times the determinant of the (n-1)\times(n-1) diagonal matrix with a_{2}, \dots, a_{n} on the diagonal.  Keep going (or induct) and the whole product falls out.  Seriously.  Do it.  It’s gonna really blow your socks off.  \Box.
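For concreteness, here’s the expansion for n = 3, going down the first column (only a_{1} survives, and then you just repeat):

\det\left( \begin{array}{ccc} a_{1} & 0 & 0 \\ 0 & a_{2} & 0 \\ 0 & 0 & a_{3} \end{array}\right) = a_{1}\det\left( \begin{array}{cc} a_{2} & 0 \\ 0 & a_{3} \end{array}\right) = a_{1}(a_{2}a_{3} - 0\cdot 0) = a_{1}a_{2}a_{3}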

Isn’t that enough for you to start loving diagonal matrices?  No?  Well, let’s talk about something else, then.  What if you want to take the inverse of your matrix?  Do you need to go through all of that crap where you find the matrix of cofactors, transpose it to get the adjugate, and then multiply it by the reciprocal of the determinant?  Hell no!

Theorem:  Given a diagonal matrix M like the one above with every a_{i} non-zero, we have that the inverse will be

\left( \begin{array}{cccc} a_{1}^{-1} & 0 & \dots & 0 \\ 0 & a_{2}^{-1} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{n}^{-1} \end{array}\right)

so, in other words, we just need to make a new diagonal matrix with the reciprocals of the entries on the diagonal.

Proof. Two ways to prove this.  Either you “take my word for it” and just multiply the two matrices out to see that this works (each diagonal entry of the product is a_{i}a_{i}^{-1} = 1, and everything off the diagonal stays 0), or you can do it the long way with the adjugate.  Either way, it works.  \Box.
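And if you’d rather see the multiplication than take my word for it, here it is written out for n = 2:

\left( \begin{array}{cc} a_{1} & 0 \\ 0 & a_{2} \end{array}\right)\left( \begin{array}{cc} a_{1}^{-1} & 0 \\ 0 & a_{2}^{-1} \end{array}\right) = \left( \begin{array}{cc} a_{1}a_{1}^{-1} & 0 \\ 0 & a_{2}a_{2}^{-1} \end{array}\right) = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)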

Note that the theorem above implies:

Corollary: det(M^{-1}) = (det(M))^{-1}.

Proof. As above, det(M) = a_{1}a_{2}\cdots a_{n}, and we have

det(M^{-1}) = a_{1}^{-1}a_{2}^{-1}\cdots a_{n}^{-1} = (a_{1}a_{2}\cdots a_{n})^{-1} = (det(M))^{-1}.  \Box.

Corollary: A diagonal matrix M is invertible if and only if each element on the diagonal is non-zero.

Proof. Seriously, you can do this one yourself.  One direction: if every element on the diagonal is non-zero, the theorem above hands you the inverse explicitly.  The other: if some a_{i} = 0, the inverse formula above breaks (0 has no reciprocal), and, better, the determinant is 0, so that “zero determinant implies no inverse” theorem finishes the job.  \Box.
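To see the failure in the flesh, put a 0 on the diagonal:

M = \left( \begin{array}{cc} a_{1} & 0 \\ 0 & 0 \end{array}\right), \qquad \det(M) = a_{1}\cdot 0 = 0

No matrix N can satisfy MN = I here: the second row of M is all zeros, so the second row of MN is all zeros too, but the identity needs a 1 in that row.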

Okay, now, wanna hear something else cool?  This is along the lines of all that linear algebra we’ve been talkin’ about.  Suppose we have a finite-dimensional vector space V with basis \{v_{1}, \dots, v_{n}\} and a linear map T:V\rightarrow V such that M(T), the matrix corresponding to this map with respect to that basis, is diagonal.  Wanna know what the eigenvalues are?  Just look at the diagonal!

Theorem:  Let V be a finite-dimensional vector space and let T:V\rightarrow V be a linear map such that M(T), with respect to a basis \{v_{1}, \dots, v_{n}\}, is diagonal and given by

\left( \begin{array}{cccc} \lambda_{1} & 0 & \dots & 0 \\ 0 & \lambda_{2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_{n} \end{array}\right)

then the eigenvalues of T are exactly \{\lambda_{1}, \dots, \lambda_{n}\}.

Proof. Now, this is actually sort’a sweet, but at first you might not believe it.  Because, I mean, it seems almost too easy, right?  So, okay, what does this actually mean, though?  Let’s take one particular column.  The i-th column of M(T) states that T(v_{i}) = \lambda_{i}v_{i}, which makes \lambda_{i} an eigenvalue for each i with corresponding eigenvector v_{i}.  And there are no other eigenvalues: if T(v) = \lambda v for some non-zero v = c_{1}v_{1} + \cdots + c_{n}v_{n}, then comparing coefficients gives c_{i}\lambda_{i} = \lambda c_{i} for every i, so \lambda = \lambda_{i} for any i with c_{i} \neq 0.  Neat.  \Box.
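In coordinates, the column-reading trick for n = 2 looks like this: M(T) applied to the first coordinate vector just scales it by \lambda_{1},

\left( \begin{array}{cc} \lambda_{1} & 0 \\ 0 & \lambda_{2} \end{array}\right)\left( \begin{array}{c} 1 \\ 0 \end{array}\right) = \left( \begin{array}{c} \lambda_{1} \\ 0 \end{array}\right) = \lambda_{1}\left( \begin{array}{c} 1 \\ 0 \end{array}\right)

and likewise the second coordinate vector gets scaled by \lambda_{2}.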

This kind of thing is why we love diagonal matrices.  We can instantly find the eigenvalues, we can easily invert them, and we can find the determinant just by multiplying a few things together.  That’s why you should \heartsuit diagonal matrices.
