Matrices? But I hate those! : All Matrices Are Good, but Diagonal Matrices are More Good.
July 16, 2010
Let’s just quickly go over what we did in the last post: given finite-dimensional vector spaces $V$ and $W$ with bases $(v_1, \dots, v_n)$ and $(w_1, \dots, w_m)$ and some linear map $T: V \to W$, we can write down a matrix called $\mathcal{M}(T)$ (though, there really should be some mention of the bases used in this notation…I’m just very lazy and assume that our bases are given beforehand) which looks a lot like this:

$$\mathcal{M}(T) = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix}$$

where those weird coefficients $a_{j,k}$ are given by writing the images of our basis elements of $V$ out in terms of the basis elements of $W$. In other words, they’re from the equations:

$$Tv_k = a_{1,k}w_1 + a_{2,k}w_2 + \cdots + a_{m,k}w_m$$
Okay, remember that? Good. So, that’s where we get our matrix from. Alright.
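If you like to see this numerically, here’s a small numpy sketch (my own toy example, not from the post): we build the matrix of a hypothetical map $T: \mathbb{R}^3 \to \mathbb{R}^2$ column by column, where the $k$-th column is the image of the $k$-th standard basis vector.

```python
import numpy as np

# A hypothetical linear map T : R^3 -> R^2, defined by a formula.
def T(v):
    x, y, z = v
    return np.array([x + 2*y, 3*z - y])

# The k-th column of M(T) holds the coefficients of T(e_k)
# in the standard basis of the codomain.
basis = np.eye(3)
M = np.column_stack([T(e) for e in basis])

# Applying T is now the same as multiplying by M.
v = np.array([1.0, 2.0, 3.0])
assert np.allclose(M @ v, T(v))
print(M)
```

Since we used the standard bases here, the columns really are just $T(e_1), T(e_2), T(e_3)$; with fancier bases you would first rewrite each image in terms of the codomain’s basis.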
Matrices with lots of Zeros.
From now on, we’re going to consider maps which are much nicer than the ones we’ve been considering. Specifically, a lot of information can come from sending one vector space to another, but a number of good things happen if we send a vector space to itself. In other words, we will be considering a finite-dimensional vector space $V$ and linear maps $T: V \to V$.
(Note: The book that I’m using as a reference for these linear algebra posts, Linear Algebra Done Right by Axler, uses the shorthand $T \in \mathcal{L}(V)$ for maps where the domain and co-domain are the same. Because I find this potentially confusing, I stick to the notation $T: V \to V$.)
In addition to that, our bases for both of the $V$’s will be the same. Therefore, given that our basis for $V$ is $(v_1, \dots, v_n)$, we will be writing equations of the form

$$Tv_k = a_{1,k}v_1 + a_{2,k}v_2 + \cdots + a_{n,k}v_n$$
An astute reader will note that, when we transform this into $\mathcal{M}(T)$, the matrix will be a square matrix — one which has the same number of rows and columns. This type of matrix is ridiculously nice, which is why we like to play around with maps that have the same domain and co-domain.
Now, what if we partitioned $V$ into some invariant subspaces? Say, for example, $V$ is four-dimensional and we have $V = U_1 \oplus U_2$, where the basis for $U_1$ is $(v_1, v_2)$ and the basis for $U_2$ is $(v_3, v_4)$. Well, then, what can we say? The linear combination for $Tv_1$ (and likewise for $Tv_2$) will contain only the first two basis elements, since the subspace $U_1$ is invariant! For example:

$$Tv_1 = a_{1,1}v_1 + a_{2,1}v_2$$

and similarly, the latter basis elements are expressible without the use of the first basis elements! For example:

$$Tv_3 = a_{3,3}v_3 + a_{4,3}v_4$$
What does this look like when we translate it into the matrix $\mathcal{M}(T)$? Well, it gives us a boat-load of 0’s. The general matrix

$$\begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4} \end{pmatrix}$$

reduces, in our example, to just

$$\begin{pmatrix} a_{1,1} & a_{1,2} & 0 & 0 \\ a_{2,1} & a_{2,2} & 0 & 0 \\ 0 & 0 & a_{3,3} & a_{3,4} \\ 0 & 0 & a_{4,3} & a_{4,4} \end{pmatrix}$$
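Here’s a quick numpy sanity check (my own made-up numbers) of what that block-of-zeros shape buys you: a vector lying in the span of the first two basis vectors gets mapped back into that same span, which is exactly what “invariant subspace” means.

```python
import numpy as np

# A hypothetical operator on R^4 whose matrix has the shape above:
# the spans of (e1, e2) and (e3, e4) are each invariant.
M = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [1.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 5.0, 4.0],
    [0.0, 0.0, 2.0, 1.0],
])

u = np.array([7.0, -2.0, 0.0, 0.0])   # lives in span(e1, e2)
Mu = M @ u
# The image has no component along e3 or e4: the subspace is invariant.
assert np.allclose(Mu[2:], 0.0)
print(Mu)
```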
which, as you’ll notice, has a lot of zeros! This makes things a lot nicer, since it’s easier to do everything if your matrix has a lot of zeros. For those of you who are matrix-savvy, notice that this matrix is really just a block matrix, and we may write it as

$$\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$$

where $A$ is the $2 \times 2$ matrix of $T$ restricted to $U_1$ and $B$ is the matrix of $T$ restricted to $U_2$. This makes looking at it and taking the determinant significantly easier: the determinant of the whole matrix is just $\det(A)\det(B)$.
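For instance (again, my own numbers), numpy will happily confirm that the determinant of a block-diagonal matrix is the product of the determinants of its blocks:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[5.0, 4.0],
              [2.0, 1.0]])

# Assemble the block-diagonal matrix ( A 0 ; 0 B ).
M = np.block([[A, np.zeros((2, 2))],
              [np.zeros((2, 2)), B]])

# det(M) = det(A) * det(B): here 5 * (-3) = -15.
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(B))
print(np.linalg.det(M))
```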
Let’s take this a bit farther, and see if we can’t tease stuff out of this. Yes, that’s a lot of zeros, but I wish there were more zeros. Well, how can we do this?
Oh, I know. What if we partitioned $V$ into a whole bunch of one-dimensional invariant subspaces? Suppose $V = U_1 \oplus U_2 \oplus \cdots \oplus U_n$, where each of the $U_k$’s is one-dimensional and generated by the basis element $v_k$. So, what do we know about this now? How can we write $Tv_1$? Easy:

$$Tv_1 = a_{1,1}v_1$$

and, in general, $Tv_k$ is just $a_{k,k}v_k$, and every other term has a coefficient of 0. So for $\mathcal{M}(T)$, the only things we need to fill in are the diagonal entries:

$$\mathcal{M}(T) = \begin{pmatrix} a_{1,1} & 0 & \cdots & 0 \\ 0 & a_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{n,n} \end{pmatrix}$$
Just look at how many zeros we have now! It’s astounding, really.
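In numpy terms (my own toy numbers once more), a diagonal operator just scales each basis vector by its diagonal entry and never rotates it into the others, which is why computing with diagonal matrices is so cheap — even taking powers happens entrywise.

```python
import numpy as np

# A diagonal operator: each one-dimensional subspace span(e_k)
# is invariant, and T(e_k) = a_k * e_k.
diag = np.array([2.0, -1.0, 3.0])
M = np.diag(diag)

for k, e in enumerate(np.eye(3)):
    # Each basis vector is just scaled, never mixed with the others.
    assert np.allclose(M @ e, diag[k] * e)

# Powers of the matrix are just powers of the diagonal entries.
assert np.allclose(np.linalg.matrix_power(M, 3), np.diag(diag**3))
print(np.diag(M))
```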
(Note: I had planned to talk about upper triangular matrices, but I feel that, in an informal lecture like this one, there is no real need to bring them up. Ideally, we’d like to strive for matrices which are diagonal, so I wanted to show exactly why we wanted that to be the case. It’s true that UT matrices have nice properties as well, which also tell us nice things about the particulars of the linear map, but I feel it would take far too much time and not be nearly as useful as just going straight to diagonals.)
Okay, I’m going to cut this post short, even though we didn’t do all that much, since the next post will be about the properties of diagonal matrices in general. We will follow this by postin’ about the properties we can derive from those general ones, which will help us in linear algebra; specifically, we’re going to talk about why diagonal matrices and eigenvalues are bffs.