Linear Maps and Inner Products: Will They Ever Get Along?
July 24, 2010
Short answer here: yes, inner products and linear maps can be friends, but we have to be careful about it. Let's assume for this post that $V$ is a nontrivial finite-dimensional vector space of dimension $n$. Now, let's say that $\mathbb{F}$ is a field of scalars (such as the reals or the complexes) and let's consider a linear map $\varphi : V \to \mathbb{F}$. We call such a map a linear functional; that is, a linear map is a linear functional if its domain is a vector space and its codomain is the underlying field of scalars. This really isn't as scary as it sounds.
For the heck of it, though, let's do an example. Let's say that $V = \mathbb{R}^3$. Notice that we have the domain as some vector space (namely the vector space $\mathbb{R}^3$) and we're mapping into our field of scalars (the reals). What could this map do? Well, how about something like: $\varphi(x_1, x_2, x_3) = x_1$, giving the first coordinate of some vector as a scalar. Yeah, but, okay, that's kind of boring. So what about this one: let's fix an element in $\mathbb{R}^3$ and call it $u$; now let's define $\varphi(v) = \langle v, u \rangle$. In other words, this map takes the inner product of whatever we put in with some fixed vector in the space.
Now, think about that last example. Read it a bunch. Let's try it once: let's use all the stuff in the last paragraph, and the map will be that weird inner product thing. Let's actually make our $u = (1, 2, 3)$ and give $\mathbb{R}^3$ the standard inner product. Then what does $\varphi$ do? Well, for a general element $v = (v_1, v_2, v_3)$ we have that

$$\varphi(v) = \langle v, u \rangle = v_1 + 2v_2 + 3v_3.$$

So if we take $v = (1, 1, 1)$, we note that $\varphi(v) = 1 + 2 + 3 = 6$. Cool.
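If you like to poke at this sort of thing numerically, here's a quick Python sanity check of that "inner product against a fixed vector" functional. The fixed vector $u = (1, 2, 3)$ and the test vectors are just illustrative values; nothing about them is special:

```python
# Sanity-checking the "inner product against a fixed vector" functional
# on R^3. The fixed vector u and the test vectors are illustrative only.

def inner(v, w):
    """Standard inner product on R^3."""
    return sum(vi * wi for vi, wi in zip(v, w))

u = (1, 2, 3)  # the fixed vector that defines the functional

def phi(v):
    return inner(v, u)

print(phi((1, 1, 1)))  # 1*1 + 1*2 + 1*3 = 6

# And phi really is linear: phi(a*v + w) == a*phi(v) + phi(w).
v, w, a = (2, 0, -1), (5, 5, 5), 3
lhs = phi(tuple(a * vi + wi for vi, wi in zip(v, w)))
print(lhs == a * phi(v) + phi(w))  # True
```

The linearity check at the end is the whole reason this map qualifies as a linear functional in the first place.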
This should seem like kind of a weird way to map things. It seems like a very strange kind of map! BUT, it actually turns out that every linear functional can be written this way. Every single one of them. This actually blew my mind the first time that I read it, and if it doesn't blow your mind, think about what a linear functional is and then think about what inner products do: what do they have to do with each other?!
Either way, let's prove it. But first, a lemma.
Lemma (Representing an Element Using an Orthonormal Basis): If we have an orthonormal basis $e_1, e_2, \dots, e_n$ for our nontrivial finite-dimensional vector space $V$, then we can write any vector $v \in V$ as $v = \langle v, e_1 \rangle e_1 + \langle v, e_2 \rangle e_2 + \cdots + \langle v, e_n \rangle e_n$.
We’re going to use this lemma in our next proof, so we might as well state it now. In fact, we’re going to be using this a lot, so make sure you “get” this!
Proof. Welp, we know we can write $v = a_1 e_1 + a_2 e_2 + \cdots + a_n e_n$ for some scalars $a_1, \dots, a_n$. That's just the definition of having a basis. Now, let's take the inner product of both sides of this with an arbitrary orthonormal basis element $e_j$:

$$\langle v, e_j \rangle = \langle a_1 e_1 + \cdots + a_n e_n, e_j \rangle = a_1 \langle e_1, e_j \rangle + \cdots + a_n \langle e_n, e_j \rangle$$

by the nature of an orthonormal basis and inner products. Note that

$$\langle e_i, e_j \rangle = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}$$

which implies that $\langle v, e_j \rangle = a_j$. We can plug this inner product thing in wherever we see $a_j$ for each $j$, and so plugging it into

$$v = a_1 e_1 + a_2 e_2 + \cdots + a_n e_n$$

gives us exactly

$$v = \langle v, e_1 \rangle e_1 + \langle v, e_2 \rangle e_2 + \cdots + \langle v, e_n \rangle e_n,$$

as we wanted. $\square$
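The lemma is easy to sanity-check numerically. Here's a small Python sketch; the rotated orthonormal basis and the vector $v$ are made-up values, chosen only so the basis isn't the boring standard one:

```python
# Numeric check of the lemma on R^3, using an orthonormal basis that
# isn't the standard one (a rotation in the xy-plane). The rotation
# angle and the vector v are made-up illustrative values.
import math

def inner(v, w):
    """Standard inner product on R^3."""
    return sum(vi * wi for vi, wi in zip(v, w))

c, s = math.cos(0.7), math.sin(0.7)
basis = [(c, s, 0.0), (-s, c, 0.0), (0.0, 0.0, 1.0)]  # orthonormal

v = (3.0, -1.0, 2.0)

# Rebuild v as <v, e_1> e_1 + ... + <v, e_n> e_n.
rebuilt = (0.0, 0.0, 0.0)
for e in basis:
    coeff = inner(v, e)  # this is exactly <v, e_j> = a_j from the proof
    rebuilt = tuple(r + coeff * ei for r, ei in zip(rebuilt, e))

print(all(abs(ri - vi) < 1e-12 for ri, vi in zip(rebuilt, v)))  # True
```

Note that the coefficients $\langle v, e_j \rangle$ depend on which orthonormal basis you pick, but the reconstruction works for any of them.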
Theorem: Let $V$ be a nontrivial finite-dimensional vector space and let $\mathbb{F}$ be its underlying field of scalars. Then if $\varphi : V \to \mathbb{F}$ is a linear functional, there exists a unique vector $u \in V$ such that $\varphi(v) = \langle v, u \rangle$ for every $v \in V$.
Before we do this proof, just, again, look at how damn powerful this theorem is: it states that if we have any linear functional, we can reduce it to this form! So damn cool.
Proof. As per usual, we’re going to show that this vector actually exists, and then we’ll show that it must be unique if it does exist. This is a pretty standard way to prove something like this. Let’s go.
First, we know by Gram-Schmidt that there is an orthonormal basis for $V$, so let's call it $e_1, e_2, \dots, e_n$. Because this is an orthonormal basis, we can represent any $v \in V$ as

$$v = \langle v, e_1 \rangle e_1 + \cdots + \langle v, e_n \rangle e_n$$

by the lemma above. Now, we have

$$\varphi(v) = \varphi(\langle v, e_1 \rangle e_1 + \cdots + \langle v, e_n \rangle e_n) = \langle v, e_1 \rangle \varphi(e_1) + \cdots + \langle v, e_n \rangle \varphi(e_n) = \langle v, \overline{\varphi(e_1)} e_1 + \cdots + \overline{\varphi(e_n)} e_n \rangle,$$

where those lines denote the conjugate element. It's extremely important that you understand this last equality (where we have essentially done the opposite of pulling a scalar out of the second slot of the inner product: a scalar comes out of the second slot conjugated, so a conjugated scalar goes back in plain), so keep looking at it until you get it. Now, note that if we set

$$u = \overline{\varphi(e_1)} e_1 + \cdots + \overline{\varphi(e_n)} e_n,$$

we will have $\varphi(v) = \langle v, u \rangle$ for every $v \in V$, as we needed! Yesssss.
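Here's a Python sketch of that existence recipe over $\mathbb{C}^2$ with the standard inner product $\langle v, w \rangle = \sum_i v_i \overline{w_i}$. The functional `phi` below is an arbitrary made-up linear functional; the construction of `u` from conjugated values of `phi` on the basis is the one from the proof:

```python
# Sketch of the existence construction over C^2 with the standard
# inner product <v, w> = sum v_i * conj(w_i). The coefficients of the
# functional phi are made-up; the recipe for u is the one in the proof:
# u = conj(phi(e_1)) e_1 + ... + conj(phi(e_n)) e_n.

def inner(v, w):
    """Standard inner product on C^2 (conjugate in the second slot)."""
    return sum(vi * wi.conjugate() for vi, wi in zip(v, w))

def phi(v):
    """Some linear functional on C^2 (arbitrary coefficients)."""
    return (2 + 1j) * v[0] - 3j * v[1]

basis = [(1 + 0j, 0j), (0j, 1 + 0j)]  # standard orthonormal basis

# Build u exactly as in the proof.
coeffs = [phi(e).conjugate() for e in basis]
u = tuple(sum(c * e[i] for c, e in zip(coeffs, basis)) for i in range(2))

v = (1 - 2j, 4 + 1j)
print(abs(phi(v) - inner(v, u)) < 1e-12)  # True
```

Without the conjugation on the coefficients, the check would fail over the complexes, which is exactly why those bars show up in the proof.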
Now, the (slightly less interesting) uniqueness part. Suppose there were two such elements $u_1, u_2 \in V$ such that $\varphi(v) = \langle v, u_1 \rangle$ and $\varphi(v) = \langle v, u_2 \rangle$ for every $v \in V$. Then we have

$$\langle v, u_1 \rangle - \langle v, u_2 \rangle = \varphi(v) - \varphi(v) = 0.$$

But note that, for the left hand side,

$$\langle v, u_1 \rangle - \langle v, u_2 \rangle = \langle v, u_1 - u_2 \rangle,$$

and so we're left with

$$\langle v, u_1 - u_2 \rangle = 0.$$

But $v$ is just some arbitrary element. Let's let $v = u_1 - u_2$. Then

$$\langle u_1 - u_2, u_1 - u_2 \rangle = 0,$$

and since $\langle w, w \rangle = 0$ if and only if $w = 0$, we have that $u_1 - u_2 = 0$, which implies $u_1 = u_2$. Uniqueness. $\square$
Holy crap that’s cool. Okay, that’s enough weird stuff for this linear map session. Next time we’re going to talk about adjoints, because they’re really what we wanna talk about. After that, we’re going to get to something cool called the Spectral Theorem. Doesn’t that just sound bad-ass? Spectral. Spectral.