Matrices? But I hate those! : An Upsetting Adventure in Using Matrices to Represent Linear Maps.

July 15, 2010

Okay. I get it. You're sick and tired of matrices. We all are. You didn't really like doing them in high school, and it's really tough to remember whether a given one is 2\times 3 or 3\times 2. But, you know what? You're gonna have to tough it out. Because matrices make everything we've been doing with linear maps a whole hell of a lot easier. So let's start digging a hole so big that we'll never be able to get out of it.

First, you should already know what a matrix is. It's just a formal object with some weird ways to multiply, add, and so forth. There's also this thing called the determinant, and there's just about a billion ways to find it. One perfectly good way is cofactor expansion (also called Laplace expansion) along the first row.
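For an n\times n matrix A = (a_{i,j}), it goes like this:

\det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1,j} \det(A_{1,j})

where A_{1,j} means A with its first row and j-th column crossed out. For a 2\times 2 matrix, this boils down to the formula you probably had beaten into you in high school:

\det \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) = ad - bc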

Alright, now, let's talk about something completely different. Let's switch over to linear algebra. Okay, say we have two finite-dimensional vector spaces V and W. Now, we obviously have a basis for each one of these, right? Let's just write them down: \{v_{1}, \dots, v_{n}\} is the basis for V and \{w_{1}, \dots, w_{m}\} is the basis for W. Notice that these spaces do not have to have the same dimension! Now, let's consider a linear map T:V\rightarrow W. Any one will do, just think of your favorite one.

Let’s take some arbitrary element v\in V.  Okay, cool, we got v now, so what can we do?  Well, we can write v in terms of the basis for V as follows:

v = a_{1}v_{1} + \dots + a_{n}v_{n}

If we apply T to v, then what happens? Well, since T is linear, we get

T(v) = a_{1}T(v_{1}) + \dots + a_{n}T(v_{n})

right? So, really, if we want to know what happens to any element under T, it suffices to see where T takes each basis element! This makes sense, right? (If you want to build a barn, you can either paint the whole thing when it's done or you can paint each individual plank of wood that makes it up before you build it. Does this make sense? If not, try to imagine your own reason.) So we really just need to figure out the following thing: for each v_{i}, what is T(v_{i})? Well, this is easy. T(v_{i}) is going to be in W, and so we can write it as a linear combination of the basis for W. In particular, we have T(v_{i}) = a_{1,i}w_{1} + a_{2,i}w_{2} + \dots + a_{m,i}w_{m}. The coefficients are named in this funny double-subscript way because, secretly, I'm going to make them the entries of a matrix soon.

So, let’s just sum this up for a second.  We’re gonna have the following things by doing the same thing as we just did.

T(v_{1}) = a_{1,1}w_{1} + a_{2,1}w_{2} + \dots + a_{m,1}w_{m}

T(v_{2}) = a_{1,2}w_{1} + a_{2,2}w_{2} + \dots + a_{m,2}w_{m}

\vdots

T(v_{n}) = a_{1,n}w_{1} + a_{2,n}w_{2} + \dots + a_{m,n}w_{m}

See the pattern here? Yeah. Okay, so, what we're gonna do is, we're going to make a matrix out of this information. The rows are going to correspond to the basis elements of W, and the i-th column is going to record the coefficients we used to write T(v_{i}). Yeah, just look at it:

\left( \begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n}\\ \end{array} \right)

So, as we noted, if we look at, say, the first column, we get all of the coefficients of the linear combination that makes up T(v_{1}) in W. Make sure you know what the hell is going on here. I usually write "T(v_{i})" above the i-th column to remind myself that this column is the expansion of T(v_{i}), and sometimes I write w_{i} next to the i-th row to remind myself that the entries in that row are the coefficients of w_{i} in those linear combinations. We call this matrix the matrix of T with respect to the bases of V and W. But because this name is pretty damn long, we just call it M(T). You're supposed to write it {\mathcal M}(T), I think, but I really can't be bothered to make the M that fancy. Notice, though, that this matrix depends heavily on the bases we picked for both V and W. What would have happened if we picked a different basis?
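Here's the payoff, and it's worth a quick check from the definitions above (I'll use c_{i} for the coordinates of v so they don't clash with the matrix entries a_{i,j}): if v = c_{1}v_{1} + \dots + c_{n}v_{n}, then

T(v) = \sum_{j=1}^{n} c_{j}T(v_{j}) = \sum_{i=1}^{m}\left(\sum_{j=1}^{n} a_{i,j}c_{j}\right)w_{i}

and that inner sum is exactly the i-th entry of the matrix product of M(T) with the column vector (c_{1}, \dots, c_{n})^{T}. In other words, M(T) times the coordinates of v gives the coordinates of T(v) in the basis for W. That's the whole reason the matrix is set up this way.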

 

Examples, Please!

An example will be much more useful than any of this formal defining, so let's do just that. Let's take {\mathbb R}^2 as our first space with the basis \{(0,1), (1,0)\} (these are the standard basis vectors, but mind the order I've listed them in; the order of a basis matters once we start building matrices!) and let's take {\mathbb R}^3 as our second space with the standard basis \{(1,0,0),(0,1,0),(0,0,1)\}. Now, let's define the linear map. Let's let T(x,y) = (y, x, 0). You can check yourself that this is linear.

So, let's see. T((0,1)) = (1,0,0), which is simply the first basis element of {\mathbb R}^3. T((1,0)) = (0,1,0), which is simply the second. Therefore, M(T) is equal to

\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right)

Now, do you see what happened? The first column says that T((0,1)) is equal to 1((1,0,0)) + 0((0,1,0)) + 0((0,0,1)). We can essentially "read off" where the basis elements go under a linear map by looking at this matrix.
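To watch the matrix actually do its job: an arbitrary (x,y) \in {\mathbb R}^2 has coordinates (y, x) in our (funny-ordered) basis, since (x,y) = y((0,1)) + x((1,0)), and

\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right) \left( \begin{array}{c} y \\ x \\ \end{array} \right) = \left( \begin{array}{c} y \\ x \\ 0 \\ \end{array} \right)

which, read as coordinates in the standard basis of {\mathbb R}^3, is exactly T((x,y)) = (y,x,0).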

Let's do one more example. Let's take {\mathbb R}^2 as our first space and as our second space. Let's make the bases different, though: let's let the first one have the standard basis \{(1,0), (0,1)\} and let's let the second one have the slightly different basis \{(1,1), (0,-1)\}. First, check that this second one is, in fact, a basis. I'm not gonna do it, but it seems reasonable, right? If we need to write, say, (4,1), we can just take 4((1,1)) = (4,4) and then add 3((0,-1)) to get the desired element. Okay, so, remember those bases.
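By the way, finding coordinates in the basis \{(1,1), (0,-1)\} is just a tiny linear system:

(x,y) = a((1,1)) + b((0,-1)) = (a, a - b) \implies a = x, \quad b = x - y

Plug in (4,1) and you get a = 4 and b = 3, exactly like we just did.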

Okay, now, we’re going to have our linear map be T((x,y)) = (2y, 2x).  So, let’s figure out where the basis elements go: we have T((1,0)) = (0,2) = 0((1,1)) + (-2)(0,-1) and then T((0,1)) = (2,0) = 2((1,1)) + 2((0,-1)).  So, the associated matrix will be

\left( \begin{array}{cc} 0 & 2 \\ -2 & 2 \\ \end{array} \right)

Make sure you see where this comes from!
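And the same sanity check as before: (x,y) has coordinates (x,y) in the standard basis, and

\left( \begin{array}{cc} 0 & 2 \\ -2 & 2 \\ \end{array} \right) \left( \begin{array}{c} x \\ y \\ \end{array} \right) = \left( \begin{array}{c} 2y \\ 2y - 2x \\ \end{array} \right)

and, sure enough, 2y((1,1)) + (2y - 2x)((0,-1)) = (2y, 2x) = T((x,y)).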

 

Next Time…

Because I kind of want you to practice this (and because I'm pretty lazy), I'm going to write about upper triangular and diagonal matrices next time. Essentially, the idea is this: what happens if some map from V to V has an invariant subspace? Say the invariant subspace is spanned by some basis elements v_{1}, \dots, v_{m}. Then, in our associated matrix, every column corresponding to one of those v_{i} will have zeros everywhere except, perhaps, in the rows corresponding to v_{1}, \dots, v_{m}; this is because T(v_{i}) stays inside the subspace (that's what invariant means!) and so can be written as a linear combination of only those basis elements. Any time we have zeros in a matrix, life is much nicer, so we're going to try to break our space into a bunch of invariant subspaces so that we can put lots of zeros in our associated matrix.
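To preview the shape of things (just a sketch; the details are next time's job): if v_{1}, \dots, v_{m} span an invariant subspace of a map T:V\rightarrow V with \dim V = n, then the associated matrix looks like

\left( \begin{array}{cc} A & B \\ 0 & C \\ \end{array} \right)

where A is an m\times m block recording what T does inside the invariant subspace, and that 0 is an (n-m)\times m block of zeros sitting under it.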

As a special treat, if we have enough zeros (a lot of them!) then we can even figure out the eigenvalues for the map!  Talk about more bang for your buck!  But this has to wait until next time.  Suspense.
