(Note) So, the general spectral theorem is pretty sweet, but (as Sheldon Axler does in Linear Algebra Done Right, the book that I’m essentially following in this blog) I’m going to split it up into two parts.  In “real” math, I suppose we should consider two cases: when the field is algebraically closed and when it is not.  The algebraically closed case is going to be nearly identical to the complex case.  But because we don’t know “how” algebraically closed the other field is, I’m not entirely certain that the “not algebraically closed” case follows from the Reals case of the theorem.  For example, if we were to use the rationals in place of the reals (the integers won’t do here, since they aren’t even a field), we could most likely produce examples which do not follow the Reals version of the spectral theorem.  Either way, we will mostly be using this “in real life” when the field is either the reals or the complexes, so I don’t feel too bad about not proving this in its full generality.

So, let’s wonder something for a second: why have I been proving all these random things?  What the hell were we looking for again?


This part is going to be really exciting, no lies.  In fact, we’re really just going to prove one or two theorems and that’ll be that.  The reason for doing so is because one of these theorems is so elegant and beautiful that I want you to focus on it.  Specifically, the fact that the matrix associated to an adjoint linear map is the conjugate transpose of the matrix associated to the original linear map.  Best.


My next post is going to be this nice proof of the fact that if we have M(T) for some linear map T: V\rightarrow V, where V is a nontrivial finite-dimensional inner product space and we work with an orthonormal basis, then M(T^{\ast}) is really easy to find: it’s simply the conjugate transpose of M(T).  This is exactly what it sounds like: we take the transpose of M(T) (by switching a_{i,j} with a_{j,i} for all i and j) and then replace every entry of the resulting matrix with its complex conjugate.  In symbols, (\overline{M(T)})^{t} = M(T^{\ast}).  Which is pretty complicated looking, but it’s really not hard to do at all.  In fact, the point of this post is to give a detailed example.  Actually, let’s do two examples, and you’re going to find that half of the time this is much easier to do than what I’ve said above.
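Just to preview the recipe with a tiny example of my own (the numbers are made up, not taken from the upcoming post): if, with respect to some orthonormal basis, we have

M(T) = \left( \begin{array}{cc} 1+i & 2 \\ 3i & 4 \end{array}\right)

then transposing and then conjugating every entry gives

M(T^{\ast}) = \left( \begin{array}{cc} 1-i & -3i \\ 2 & 4 \end{array}\right)

Notice that the real entries 2 and 4 just move (or stay put), since conjugation doesn’t touch them.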


Now, for this post, we’re going to assume something kind of unique: namely, we’re going to assume you know how to work with inner products.  Yes, alas, I’m going to make a leap of faith here — but, reader, do not let me down!  I posted a pdf explaining what they are a few posts ago, and we’ll be going over some of their basic properties as this post goes on.  But the best way to learn these is to really do a bunch of problems that deal with them.  Almost every linear algebra book that I’ve seen has a huge section on these.

On the other hand, nearly every (basic) linear algebra book that I’ve seen has either a passing mention of adjoints or no mention at all.  This is more of an advanced topic, but it really doesn’t need to be — it’s not difficult, it’s just not all that intuitive.

But let’s stop talking about it and let’s start doing it.


Short answer here: yes, inner products and linear maps can be friends, but we have to be careful about it.  Let’s assume for this post that V is a nontrivial finite-dimensional vector space of dimension n.  Now, let’s say that F is its field of scalars (such as the reals or the complexes) and let’s consider a linear map T:V\rightarrow F.  We call such a map a linear functional; that is, a linear map whose domain is a vector space and whose codomain is the underlying field of scalars.  This really isn’t as scary as it sounds.
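Here’s a quick example of my own (not one we’ll need later): take V = \mathbb{R}^{3} with F = \mathbb{R}, and define

T(x_{1}, x_{2}, x_{3}) = 2x_{1} - x_{2} + 5x_{3}

This eats a vector and spits out a single scalar, and it does so linearly, so it’s a linear functional.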


At some point, I mentioned that if V is a nontrivial finite-dimensional complex vector space and T:V\rightarrow V is a linear map, then T has an eigenvalue.  We also noted that if V is an odd-dimensional real vector space, then T has an eigenvalue.  Stochastic matrices, which show up in probability theory and a whole bunch of other places, have this same sort of nice property: if M(T) happens to be stochastic, then T has an eigenvalue!
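To get a feel for why (a toy example of my own, not the proof): take the row-stochastic matrix

\left( \begin{array}{cc} 0.3 & 0.7 \\ 0.6 & 0.4 \end{array}\right)

Since each row sums to 1, multiplying it against the all-ones vector gives

\left( \begin{array}{cc} 0.3 & 0.7 \\ 0.6 & 0.4 \end{array}\right) \left( \begin{array}{c} 1 \\ 1 \end{array}\right) = \left( \begin{array}{c} 1 \\ 1 \end{array}\right)

so 1 is an eigenvalue, with eigenvector (1, 1).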


So, I wanted to move on towards some pretty cool mathematics (and, finally, get through basic linear algebra) by introducing the Gram-Schmidt orthonormalization process and some really sweet consequences (that actually really surprised me!), but it occurred to me that I’d need to introduce norms and inner products, as well as prove a butt-load of things about them.

Because I am terribly lazy, I am not going to do this.  Instead, I read through a number of inner product introductions (which are all basically the same) to find one that was well-written.  The one that I’ve picked to show y’all is by G. Keady, from the University of Birmingham.  It is in pdf form, and it is available here (warning, pdf!).

Aside from its ultra-brevity (orthonormal is abbreviated ON, which takes a little getting used to) and some of the things at the end of the paper, this is a 3-page introduction and, partly because of that, it is very readable.  You should not need any math besides what we’ve already covered in this blog.

We’ll give examples of normed vector spaces and inner product spaces later, but we’ll definitely be using the inner product space of continuous (real) polynomials on the interval [0,1], which has the inner product

\displaystyle\langle f, g\rangle = \int_{0}^{1} f(x)g(x)dx

This will come in handy later, so remember it!
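For instance (a quick computation of my own), taking f(x) = x and g(x) = x^{2}, the inner product above works out to

\displaystyle\langle x, x^{2}\rangle = \int_{0}^{1} x\cdot x^{2}\, dx = \int_{0}^{1} x^{3}\, dx = \frac{1}{4}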

What is a diagonal matrix?  It’s one that looks like this:

\left( \begin{array}{cccc} a_{1} & 0 & \dots & 0 \\ 0 & a_{2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{n} \end{array}\right)

There are a number of good reasons to like diagonal matrices.  They look kind of neat and they have some sweet properties.  So what are some of these properties?
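As a little teaser (this is a standard fact, not anything specific to the upcoming post): diagonal matrices multiply entry-by-entry along the diagonal.  For example,

\left( \begin{array}{cc} a_{1} & 0 \\ 0 & a_{2} \end{array}\right) \left( \begin{array}{cc} b_{1} & 0 \\ 0 & b_{2} \end{array}\right) = \left( \begin{array}{cc} a_{1}b_{1} & 0 \\ 0 & a_{2}b_{2} \end{array}\right)

and, in particular, taking powers of a diagonal matrix just means taking powers of each diagonal entry.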


Let’s just quickly go over what we did in the last post: given some finite-dimensional vector spaces V and W, with bases \{v_{1}, \dots, v_{n}\} and \{w_{1}, \dots, w_{m}\} respectively, and some linear map T:V\rightarrow W, we can write down a matrix called M(T) (though there really should be some mention of the bases used in this notation…I’m just very lazy and assume that our bases are given beforehand), which looks a lot like this:

\left( \begin{array}{cccc} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \dots & a_{m,n} \end{array} \right)

where those weird coefficients are given by writing the images of the basis elements of V out in terms of the basis elements of W.  In other words, they come from the equations:

T(v_{1}) = a_{1,1}w_{1} + a_{2,1}w_{2} + \dots + a_{m,1}w_{m}

T(v_{2}) = a_{1,2}w_{1} + a_{2,2}w_{2} + \dots + a_{m,2}w_{m}

\vdots

T(v_{n}) = a_{1,n}w_{1} + a_{2,n}w_{2} + \dots + a_{m,n}w_{m}

Okay, remember that?  Good.  So, that’s where we get our matrix M(T) from.  Alright.
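Here’s a tiny made-up example of that recipe in action: take T:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2} defined by T(x, y) = (x + y, 2y), and use the standard basis on both sides, so v_{1} = w_{1} = (1, 0) and v_{2} = w_{2} = (0, 1).  Then

T(v_{1}) = T(1, 0) = (1, 0) = 1\cdot w_{1} + 0\cdot w_{2}

T(v_{2}) = T(0, 1) = (1, 2) = 1\cdot w_{1} + 2\cdot w_{2}

and reading the coefficients off column-by-column gives

M(T) = \left( \begin{array}{cc} 1 & 1 \\ 0 & 2 \end{array}\right)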


Okay.  I get it.  You’re sick and tired of matrices.  We all are.  You didn’t really like doing them in high school, it’s really tough to remember if one of them is 2\times 3 or 3\times 2.  But, you know what?  You’re gonna have to tough it out.  Because matrices really make everything that we’ve been doing with linear maps a whole hell of a lot easier.  So let’s start digging a hole so big that we’ll never be able to get out of it.
