Matrices? But I hate those! : All Matrices Are Good, but Diagonal Matrices are More Good.

July 16, 2010

Let’s just quickly go over what we did in the last post: we saw that, given finite-dimensional vector spaces V and W with bases \{v_{1}, \dots, v_{n}\} and \{w_{1}, \dots, w_{m}\} and some linear map T:V\rightarrow W, we can write down a matrix called M(T) (though there really should be some mention of the bases used in this notation…I’m just very lazy and assume that our bases are given beforehand) which looks a lot like this

\left( \begin{array}{cccc} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \dots & a_{m,n} \end{array} \right)

where those weird coefficients are given by writing the images of our basis elements of V out in terms of the basis elements of W.  In other words, they’re from the equations:

T(v_{1}) = a_{1,1}w_{1} + a_{2,1}w_{2} + \dots + a_{m,1}w_{m}

T(v_{2}) = a_{1,2}w_{1} + a_{2,2}w_{2} + \dots + a_{m,2}w_{m}

\vdots

T(v_{n}) = a_{1,n}w_{1} + a_{2,n}w_{2} + \dots + a_{m,n}w_{m}

Okay, remember that?  Good.  So, that’s where we get our matrix M(T) from.  Alright.
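If you’d like to see this done concretely, here’s a quick sketch in Python with NumPy.  The map T below is completely made up for illustration; the point is just that the j-th column of M(T) is the coordinate vector of T(v_{j}).

```python
import numpy as np

# A made-up linear map T : R^3 -> R^2, using the standard bases:
# T(x, y, z) = (x + 2y, 3z).
def T(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z])

# The j-th column of M(T) is T applied to the j-th basis vector of V,
# written in coordinates with respect to the basis of W.
basis_V = np.eye(3)  # rows are the standard basis vectors e_1, e_2, e_3
M_T = np.column_stack([T(e) for e in basis_V])

print(M_T)
# [[1. 2. 0.]
#  [0. 0. 3.]]
```

Note the shape: m = 2 rows (one per basis element of W) and n = 3 columns (one per basis element of V), just like the general picture above.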

Matrices with lots of Zeros.

From now on, we’re going to consider maps which are much nicer than the ones we’ve been considering: a lot of information can come from sending one vector space to another, but even more good things happen if we send a vector space to itself.  In other words, we will be considering a finite-dimensional vector space V and linear maps T:V\rightarrow V.

(Note: The book that I’m using as a reference for these linear algebra posts, Linear Algebra Done Right by Axler, calls such maps operators and writes T \in \mathcal{L}(V) when the domain and codomain are the same.  Because I find this potentially confusing, I stick to the notation T:V\rightarrow V.)

In addition to that, our basis for both copies of V will be the same.  Therefore, given that our basis for V is \{v_{1}, \dots, v_{n}\}, we will be writing equations of the form

T(v_{1}) = a_{1,1}v_{1} + a_{2,1}v_{2} + \dots + a_{n,1}v_{n}

T(v_{2}) = a_{1,2}v_{1} + a_{2,2}v_{2} + \dots + a_{n,2}v_{n}

\vdots

T(v_{n}) = a_{1,n}v_{1} + a_{2,n}v_{2} + \dots + a_{n,n}v_{n}

An astute reader will note that, when we transform this into M(T), the matrix will be a square matrix — one which has the same number of rows and columns.  This type of matrix is ridiculously nice, which is why we like to play around with maps that have the same domain and co-domain.
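For a quick concrete example (mine, not from the book): take T:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2} defined by T(x,y) = (y,x), with the standard basis \{e_{1}, e_{2}\} used on both sides.  Then T(e_{1}) = 0(e_{1}) + 1(e_{2}) and T(e_{2}) = 1(e_{1}) + 0(e_{2}), so

\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)

is our M(T): a 2\times 2 square matrix, just like we promised.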

Now, what if we decomposed V into a direct sum of invariant subspaces?  Say, for example, we have V = U_{1} \oplus U_{2}, where both U_{1} and U_{2} are invariant under T, the basis for U_{1} is \{v_{1}, \dots, v_{k}\}, and the basis for U_{2} is \{v_{k+1}, \dots, v_{n}\}.  Well, then, what can we say?  The linear combination for T(v_{1}) will contain only the first k basis elements, since T(v_{1}) lands back inside the invariant subspace U_{1}!  For example:

T(v_{1}) = a_{1,1}v_{1} + a_{2,1}v_{2} + \dots + a_{k,1}v_{k} \\ + 0(v_{k+1}) + 0(v_{k+2}) + \dots + 0(v_{n})

and similarly, for the last n - k basis elements, their images are expressible without the use of the first k basis elements, since U_{2} is also invariant!  For example:

T(v_{k+1}) = 0(v_{1}) + 0(v_{2}) + \dots + 0(v_{k}) \\ + a_{k+1,k+1}v_{k+1} + a_{k+2, k+1}v_{k+2} + \dots + a_{n,k+1}v_{n}

What does this look like when we translate it into the matrix M(T)?  Well, it gives us a boat-load of 0’s.  For the general matrix

\left( \begin{array}{ccccccc} a_{1,1} & a_{1,2} & \dots & a_{1,k} & a_{1,k+1} &\dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,k} & a_{2,k+1} &\dots & a_{2,n} \\ \vdots &\vdots &\ddots & \vdots & \vdots & \ddots & \vdots \\ a_{k,1} & a_{k,2} & \dots & a_{k,k} & a_{k,k+1} &\dots & a_{k,n} \\ a_{k+1,1} & a_{k+1,2} & \dots & a_{k+1,k} & a_{k+1,k+1} & \dots & a_{k+1,n} \\ \vdots &\vdots &\ddots & \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \dots & a_{n,k} & a_{n,k+1} &\dots & a_{n,n} \end{array} \right)

we can reduce this, in our example, to just

\left( \begin{array}{ccccccc} a_{1,1} & a_{1,2} & \dots & a_{1,k} & 0 &\dots & 0 \\ a_{2,1} & a_{2,2} & \dots & a_{2,k} & 0 &\dots & 0 \\ \vdots &\vdots &\ddots & \vdots & \vdots & \ddots & \vdots \\ a_{k,1} & a_{k,2} & \dots & a_{k,k} & 0 &\dots & 0 \\ 0 & 0 & \dots & 0 & a_{k+1,k+1} &\dots & a_{k+1,n}\\ \vdots &\vdots &\ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 & a_{n,k+1} &\dots & a_{n,n} \end{array} \right)

which, as you’ll notice, has a lot of zeros!  This makes things a lot nicer, since it’s easier to do just about everything if your matrix has a lot of zeros.  For those of you who are matrix-savvy, notice that this is really just a block diagonal matrix, and we may write it as

\left( \begin{array}{cc} A & 0 \\ 0 & B \end{array} \right)

where A is the k \times k upper-left block and B is the (n-k) \times (n-k) lower-right block.  This makes looking at the matrix and taking its determinant significantly easier, since the determinant of a block diagonal matrix is just the product of the determinants of its blocks.
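If you want a numerical sanity check (with entirely made-up entries, taking n = 4 and k = 2), here’s a little Python sketch: the matrix sends a vector supported on the first k coordinates to another such vector, and its determinant factors as the product of the determinants of the blocks.

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])   # det(A) = -2
B = np.array([[5., 6.],
              [7., 8.]])   # det(B) = -2

Z = np.zeros((2, 2))
M = np.block([[A, Z],      # the block diagonal matrix (A 0; 0 B)
              [Z, B]])

# Invariance: a vector in U_1 (last two coordinates zero) maps back into U_1.
u = np.array([1., -1., 0., 0.])
print(M @ u)               # [-1. -1.  0.  0.]

# The determinant is the product of the blocks' determinants.
print(np.linalg.det(M))                     # 4.0 (up to floating-point error)
print(np.linalg.det(A) * np.linalg.det(B))  # 4.0
```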

Let’s take this a bit further, and see if we can’t tease some more stuff out of this.  Yes, that’s a lot of zeros, but I wish there were even more zeros.  Well, how can we get more?

Oh, I know.  What if we decomposed V into a whole bunch of one-dimensional invariant subspaces?  Suppose V = U_{1} \oplus \dots \oplus U_{n}, where each U_{i} is one-dimensional and spanned by the basis element v_{i}.  So, what do we know about this now?  How can we write T(v_{1})?  Easy:

T(v_{1}) = a_{1,1}v_{1} + 0(v_{2}) + \dots + 0(v_{n})

and, in general, T(v_{i}) is just a_{i,i}v_{i}: every other term has a coefficient of 0.  So for M(T), the only entries we need to fill in are the diagonal ones:

\left( \begin{array}{cccc} a_{1,1} & 0 & \dots & 0 \\ 0 & a_{2,2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{n,n} \end{array} \right)

Just look at how many zeros we have now!  It’s astounding, really. 
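Just to drive home why diagonal is the dream, here’s a throwaway numerical example (nothing here is from the original argument): multiplying by a diagonal matrix simply scales each coordinate, and things like determinants and powers become one-liners.

```python
import numpy as np

D = np.diag([2., 3., 5.])   # a_{1,1} = 2, a_{2,2} = 3, a_{3,3} = 5

# Applying T just scales each coordinate by its diagonal entry.
v = np.array([1., 1., 1.])
print(D @ v)                # [2. 3. 5.]

# The determinant is the product of the diagonal entries...
print(np.linalg.det(D))     # 30.0 (that is, 2 * 3 * 5)

# ...and powers are computed entrywise on the diagonal.
print(np.linalg.matrix_power(D, 3))  # diag(8, 27, 125)
```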

(Note: I had planned to talk about upper triangular matrices, but I feel that, in an informal lecture like this one, there is no real need to bring them up.  Ideally, we’d like to strive for matrices which are diagonal, so I wanted to show exactly why we wanted that to be the case.  It’s true that UT matrices have nice properties as well, which also tell us nice things about the particulars of the linear map, but I feel it would take far too much time and not be nearly as useful as just going straight to diagonals.)

Okay, I’m going to cut this post short, even though we didn’t do all that much, since the next post will cover properties of diagonal matrices in general.  We will follow this by posting about properties that we can derive from the general ones which will help us in linear algebra; specifically, we’re going to talk about why diagonal matrices and eigenvalues are bffs.
