Adjoints!, or: Why Aren’t These in My Crappy Linear Algebra Book? part 1.

July 24, 2010

Now, for this post, we’re going to assume something kind of unique: namely, we’re going to assume you know how to work with inner products.  Yes, alas, I’m going to make a leap of faith here — but, reader, do not let me down!  I posted a pdf explaining what they are a few posts ago, and we’ll be going over some of their basic properties as this post goes on.  But the best way to learn these is to really do a bunch of problems that deal with them.  Almost every linear algebra book that I’ve seen has a huge section on them.

On the other hand, nearly every (basic) linear algebra book that I’ve seen has either a passing mention of adjoints or no mention at all.  This is more of an advanced topic, but it really doesn’t need to be — it’s not difficult, it’s just not all that intuitive.

But let’s stop talking about it and let’s start doing it.

Let’s let V and W be nontrivial finite-dimensional inner product spaces of dimension n and m respectively.  Let’s let T: V\rightarrow W be a linear map.  Then we’re going to define the adjoint of T, which we write as T^{\ast}, in the following way:

  • Fix an element w\in W (which is arbitrary but remains fixed).
  • Define T^{\ast} by \langle T(v), w\rangle = \langle v, T^{\ast}(w)\rangle for all v\in V.
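A quick numerical sanity check may help make the definition concrete.  The sketch below uses NumPy and assumes the standard inner product on {\mathbb C}^{n} (linear in the first slot, conjugate-linear in the second); it also leans on the fact, previewed at the end of this post, that in matrix terms the adjoint of a matrix map is its conjugate transpose:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear map T: C^3 -> C^2, represented by a 2x3 complex matrix.
T = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

# Under the standard inner product, the adjoint is represented
# by the conjugate transpose (previewed at the end of this post).
T_adj = T.conj().T

def inner(u, v):
    # Standard inner product: linear in the first slot,
    # conjugate-linear in the second.
    return np.sum(u * v.conj())

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# The defining identity: <T(v), w> = <v, T*(w)>.
assert np.isclose(inner(T @ v, w), inner(v, T_adj @ w))
```

This is only an illustration for one random choice of v and w, of course, not a proof; the definition demands the identity for every v.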

And that’s it.  (One can show that such a T^{\ast}(w) exists, is unique, and that the resulting map T^{\ast} is itself linear; this is where the inner product structure really earns its keep.)  It looks much simpler than it actually is in most cases.  Let’s just do two easy examples so that you get the hang of it.

Examples.

(1) Alright, let’s do a trivial example first.  If V is a nontrivial finite-dimensional inner product space, then define T:V\rightarrow V by T(v) = v.  Note that this is not a linear functional as we defined in the last post, because a linear functional goes from a vector space to its underlying field.  This particular linear map is called the identity map, and it goes from the space to itself.

So, let’s find the adjoint.  Let’s fix a w\in V and then consider

\langle T(v), w\rangle = \langle v, w\rangle = \langle v, T^{\ast}(w)\rangle

What should we define T^{\ast} as?  Well, looking at the last equality, it’s obvious: we should define T^{\ast}(w) = w for all w\in V.  This way, the equality works.  Nice.  So the adjoint is also the identity map.  Weird.

(2) Let’s do a slightly less trivial example now.  Let’s let V = {\mathbb R}^{3} with the standard inner product, and let’s let our linear transformation be T:{\mathbb R}^{3}\rightarrow {\mathbb R} defined by T((x,y,z)) = x + 2y - z.  Note that the number we get out is a scalar, so it lives in {\mathbb R} as required!  Nice.  You can check that this is a linear map, or you can trust me; either way, let’s find the adjoint.  Let’s fix w\in {\mathbb R}, and note that the inner product on {\mathbb R} is just ordinary multiplication.

\langle T((x,y,z)), w\rangle = \langle x + 2y - z, w\rangle = wx + 2wy - wz

and if we define T^{\ast}(w) = (a,b,c) for some (a,b,c)\in {\mathbb R}^{3}, then

\langle (x,y,z),T^{\ast}(w) \rangle = \langle (x,y,z),(a,b,c)\rangle = xa + yb + zc

and making these two lines equal, we get that

xa + yb + zc = wx + 2wy - wz

which has to hold for every (x,y,z), so matching coefficients gives a = w, b = 2w, c = -w.  So we should define T^{\ast}(w) = (w,2w,-w).  This gives us our adjoint.
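If you’d like to double-check this computation, here’s a small NumPy sketch (not part of the derivation; it just tests the defining identity for a random input):

```python
import numpy as np

# T: R^3 -> R, T(x, y, z) = x + 2y - z, as in the example above.
def T(v):
    x, y, z = v
    return x + 2 * y - z

# The adjoint we just computed: T*(w) = (w, 2w, -w).
def T_adj(w):
    return np.array([w, 2 * w, -w])

rng = np.random.default_rng(1)
v = rng.standard_normal(3)
w = rng.standard_normal()

# <T(v), w> in R is just the product T(v) * w;
# <v, T*(w)> is the usual dot product on R^3.
assert np.isclose(T(v) * w, v @ T_adj(w))
```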

More Adjoint Stuff.

You should’ve noticed by now that the adjoint is a little bit different from the regular linear map.  In particular, if the linear map goes T:V\rightarrow W, then the adjoint goes T^{\ast}:W\rightarrow V: they swap domain and codomain.

There are a whole bunch of properties that the adjoint and the regular linear map share, and we’ll prove them as we come to them.  But there are three properties which I really like proving, so we’re going to do them now.

Theorem: For V, W nontrivial finite-dimensional inner product spaces over a field F (like the reals or the complexes), we have for all a\in F and all linear maps T:V\rightarrow W that (aT)^{\ast} = \bar{a}T^{\ast}.

Proof. By definition, we have

\langle aT(x), y \rangle = a\langle T(x),y \rangle = a\langle x, T^{\ast}(y) \rangle = \langle x, \bar{a}T^{\ast}(y)\rangle

which completes the proof.  (Here we’re using the convention that the inner product is linear in the first slot and conjugate-linear in the second, which is why moving \bar{a} into the second slot gives back a.)  \Box.

Theorem: For V,W the same as above, and T:V\rightarrow W linear, we have that (T^{\ast})^{\ast} = T.

Proof. We have \langle T(x),y \rangle = \langle x, T^{\ast}(y) \rangle = \langle (T^{\ast})^{\ast}(x), y\rangle, and since this holds for all x and y, the equality of the left-most side and the right-most side proves the theorem.  Make sure you know where the last equality is coming from: apply the defining property to T^{\ast} and then use the conjugate symmetry of the inner product.  \Diamond.
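Both theorems so far are easy to spot-check numerically if we identify linear maps with complex matrices and use the conjugate-transpose description of the adjoint previewed at the end of this post.  A NumPy sketch (an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)

# Identify a linear map T with a complex matrix; under the standard
# inner product its adjoint is the conjugate transpose (a fact
# previewed at the end of this post).
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
adj = lambda M: M.conj().T

a = 2.0 - 1.5j

# First theorem: (aT)* = conj(a) T*.
assert np.allclose(adj(a * T), np.conj(a) * adj(T))

# Second theorem: (T*)* = T.
assert np.allclose(adj(adj(T)), T)
```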

Theorem: If V is the same as above, and Id: V\rightarrow V is the identity operator, then Id = (Id)^{\ast}.  In other words, the identity is its own adjoint.

Proof. We have that

\langle Id(x), y\rangle = \langle x,y \rangle = \langle x, Id^{\ast}(y)\rangle

and since this holds for every x, it implies that Id^{\ast}(y) = y for all y, and so the adjoint is the identity function.  \Diamond.

One important theorem that I leave to you is the fact that the adjoint works in products the same kind of way that inverses do — namely, that

Theorem: For U, V, W nontrivial finite-dimensional inner product spaces with T:V\rightarrow W and S:W\rightarrow U linear, we have (ST)^{\ast} = T^{\ast}S^{\ast}.

Proof. Up to you!
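If you want to convince yourself before proving it, here’s a NumPy spot check of the product rule, again identifying maps with matrices and adjoints with conjugate transposes (an illustration, not a substitute for your proof):

```python
import numpy as np

rng = np.random.default_rng(3)

# T: C^3 -> C^4 and S: C^4 -> C^2, as complex matrices.
T = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
S = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))

# (ST)* should equal T* S* -- note the reversed order,
# just like (ST)^{-1} = T^{-1} S^{-1} for inverses.
assert np.allclose((S @ T).conj().T, T.conj().T @ S.conj().T)
```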

I hope these theorems have been kind of fun to prove.  Either way, at this point, the adjoint may still seem like a mysterious, not-very-useful-looking thing, but we’re going to show in the next post that we have a very nice identity between M(T) and M(T^{\ast}); namely, that they’re conjugate transposes of each other, which I’ll define next time.  This makes it so that we don’t have to go through a complicated calculation like the one in the second example above in order to find the adjoint of some map; if we have the matrix associated to it, it’s a near-trivial one-line manipulation to find it!  And that’s pretty sweet, no lies.
