## Linear Maps and Inner Products: Will They Ever Get Along?

### July 24, 2010

Short answer here: yes, inner products and linear maps can be friends, but we have to be careful about it.  Let’s assume for this post that $V$ is a nontrivial finite-dimensional inner product space of dimension $n$ (we need the inner product — more on that in a second).  Now, let’s say that $F$ is its underlying field of scalars (such as the reals or the complexes) and let’s consider a linear map $T:V\rightarrow F$.  We call such a map a linear functional: a linear map whose domain is a vector space and whose codomain is the underlying field of scalars.  This really isn’t as scary as it sounds.

For the heck of it, though, let’s do an example.  Let’s say that $T:{\mathbb R}^{4}\rightarrow {\mathbb R}$.  Notice that we have the domain as some vector space (namely the vector space ${\mathbb R}^{4}$) and we’re mapping into our field of scalars (the reals).  What could this map do?  Well, how about something like: $T((a,b,c,d)) = a$, giving the first coordinate of some vector as a scalar.  Yeah, but, okay, that’s kind of boring.  So what about this one: let’s fix an element in ${\mathbb R}^{4}$ and call it $w$; now let’s define $T(x) = \langle x,w \rangle$.  In other words, this map takes the inner product of whatever we put in with some fixed vector in the space.

Now, think about that last example.  Read it a bunch.  Let’s try it once: let’s use all the stuff in the last paragraph, and the map will be that weird inner product thing.  Let’s actually make our $w = (1,2,3,4)$ and have the standard inner product.  Then what does $T$ do?  Well, for a general element $(a,b,c,d)\in {\mathbb R}^{4}$ we have that

$T((a,b,c,d)) = \langle (a,b,c,d),(1,2,3,4)\rangle = a + 2b + 3c + 4d$.

So if we take $(a,b,c,d) = (0,1,0,2)$, then $T((0,1,0,2)) = 2(1) + 4(2) = 10$.  Cool.
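To make this concrete, here’s a quick numerical sketch of that functional (using NumPy for the dot product; the names here are just for illustration, not anything canonical):

```python
import numpy as np

# Fixed vector w from the example above; T takes the standard
# inner product of its input with w.
w = np.array([1.0, 2.0, 3.0, 4.0])

def T(x):
    return np.dot(x, w)  # <x, w> = a + 2b + 3c + 4d

print(T(np.array([0.0, 1.0, 0.0, 2.0])))  # prints 10.0
```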

This might seem like a pretty strange way to map things.  BUT, it actually turns out that every linear functional can be written this way.  Every single one of them.  This actually blew my mind the first time that I read it, and if it doesn’t blow your mind, think about what a linear functional is and then think about what inner products do: what do they have to do with each other?!

Either way, let’s prove it.  But first, a lemma.

Lemma (Representing an Element Using an Orthonormal Basis): If we have an orthonormal basis $\{e_{1}, e_{2},\dots, e_{n}\}$ for our nontrivial finite-dimensional inner product space $V$, then we can write any vector $v\in V$ as $v = \langle v,e_{1}\rangle e_{1} + \cdots + \langle v,e_{n}\rangle e_{n}$.

We’re going to use this lemma in our next proof, so we might as well state it now.  In fact, we’re going to be using this a lot, so make sure you “get” this!

Proof. Welp, we know we can write $v = a_{1}e_{1} + \cdots + a_{n}e_{n}$ for some scalars $a_{1}, \dots, a_{n}$.  That’s just the definition of having a basis.  Now, let’s take the inner product of both sides of this with an arbitrary orthonormal basis element.

$\langle v, e_{j}\rangle = \langle a_{1}e_{1} + \cdots + a_{n}e_{n}, e_{j}\rangle = \langle a_{j}e_{j},e_{j}\rangle$

by linearity of the inner product in its first slot and the nature of an orthonormal basis: every cross term $\langle a_{i}e_{i}, e_{j}\rangle$ with $i\neq j$ vanishes.  Note that

$\langle a_{j}e_{j}, e_{j}\rangle = a_{j}\langle e_{j},e_{j}\rangle = a_{j}$

which implies that $\langle v,e_{j}\rangle = a_{j}$.  We can plug this inner product thing in wherever we see $a_{j}$ for each $j$, and so plugging it into

$v = a_{1}e_{1} + \cdots + a_{n}e_{n}$

gives us exactly

$v = \langle v,e_{1}\rangle e_{1} + \cdots + \langle v,e_{n}\rangle e_{n}$

as we wanted.  $\Box$.
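If you want to see the lemma in action numerically, here’s a small sketch (my own example, not from the text above): build an orthonormal basis of ${\mathbb R}^{5}$ via a QR factorization and check that the expansion recovers an arbitrary vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# QR-factor a random matrix; the columns of Q form an orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
basis = Q.T  # each row is one basis vector e_j

v = rng.standard_normal(5)

# Lemma: v = <v, e_1> e_1 + ... + <v, e_n> e_n
reconstructed = sum(np.dot(v, e) * e for e in basis)
print(np.allclose(v, reconstructed))  # prints True
```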

Theorem: Let $V$ be a nontrivial finite-dimensional inner product space and let $F$ be its underlying field of scalars.  Then if $T:V\rightarrow F$ is a linear functional, there exists a unique vector $v\in V$ such that $T(u) = \langle u,v\rangle$ for every $u\in V$.

Before we do this proof, just, again, look at how damn powerful this theorem is: it states that if we have any linear functional, we can reduce it to this form!  So damn cool.

Proof. As per usual, we’re going to show that this vector actually exists, and then we’ll show that it must be unique if it does exist.  This is a pretty standard way to prove something like this.  Let’s go.

First, we know by Gram-Schmidt that there is an orthonormal basis for $V$, so let’s call it $\{e_{1}, e_{2}, \dots, e_{n}\}$.  Because this is an orthonormal basis, we can represent $u\in V$ as

$u = \langle u,e_{1}\rangle e_{1} + \cdots + \langle u,e_{n}\rangle e_{n}$

by the lemma above. Now, we have

$T(u) = T(\langle u,e_{1}\rangle e_{1} + \cdots + \langle u,e_{n}\rangle e_{n})$

$= \langle u,e_{1}\rangle T(e_{1}) + \cdots + \langle u,e_{n}\rangle T(e_{n})$

$= \langle u,\overline{T(e_{1})}e_{1}\rangle + \cdots + \langle u,\overline{T(e_{n})}e_{n}\rangle$

$= \langle u, \overline{T(e_{1})}e_{1} + \cdots + \overline{T(e_{n})}e_{n}\rangle$

where the bars denote complex conjugation.  It’s extremely important that you understand this last equality (where we have essentially done the opposite of $\langle u, a+b\rangle = \langle u, a\rangle + \langle u, b\rangle$, using that the inner product is conjugate-linear in its second slot) so keep looking at it until you get it.  Now, note that if we set $v = \overline{T(e_{1})}e_{1} + \cdots + \overline{T(e_{n})}e_{n}$ we will have $T(u) = \langle u, v\rangle$ for every $u\in V$ as we needed!  Yesssss.

Now, the (slightly less interesting) uniqueness part.  Suppose there were two vectors $v_{1}, v_{2}\in V$ such that $T(u) = \langle u,v_{1}\rangle$ and $T(u) = \langle u,v_{2}\rangle$ for every $u\in V$.  Then we have

$\langle u,v_{1}\rangle - \langle u,v_{2}\rangle = T(u) - T(u) = 0$

But note that, for the left hand side,

$\langle u,v_{1}\rangle - \langle u,v_{2}\rangle = \langle u,v_{1} - v_{2}\rangle$

and so we’re left with

$\langle u,v_{1} - v_{2}\rangle = 0$

But $u$ is just some arbitrary element.  Let’s let $u = v_{1} - v_{2}$.  Then

$\langle v_{1} - v_{2},v_{1} - v_{2}\rangle = ||v_{1} - v_{2}||^{2} = 0$

and since $||v|| = 0$ if and only if $v = 0$, we have that $v_{1} - v_{2} = 0$, which implies $v_{1} = v_{2}$.  Uniqueness.  $\Box$.
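As a sanity check on the whole theorem, here’s a sketch under the same convention as the proof (inner product conjugate-linear in its second slot; all the names below are made up for the demo): pick an arbitrary linear functional on ${\mathbb C}^{4}$, build $v$ from the conjugated values $\overline{T(e_{j})}$ exactly as in the existence proof, and verify $T(u) = \langle u, v\rangle$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# An arbitrary linear functional on C^4: T(u) = sum_k c_k u_k.
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def T(u):
    return c @ u

# Standard complex inner product, conjugate-linear in the second slot.
def inner(x, y):
    return x @ np.conj(y)

# Riesz vector: v = conj(T(e_1)) e_1 + ... + conj(T(e_n)) e_n,
# built from the standard (orthonormal) basis.
e = np.eye(n)
v = sum(np.conj(T(e[j])) * e[j] for j in range(n))

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.isclose(T(u), inner(u, v)))  # prints True
```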

Holy crap that’s cool.  Okay, that’s enough weird stuff for this linear map session.  Next time we’re going to talk about adjoints, because they’re really what we wanna talk about.  After that, we’re going to get to something cool called the Spectral Theorem.  Doesn’t that just sound bad-ass?  Spectral.  Spectral.

Spectral.

### 2 Responses to “Linear Maps and Inner Products: Will They Ever Get Along?”

1. Dan Katz said

Wow, that IS pretty cool. Only one comment: you should probably clarify that this (obviously) only holds if the vector space IS in fact endowed with an inner product.

2. Sheens said

You’re great! Thanks, I’ve been stewing over this for ages and I couldn’t seem to get my head around it!