Seemingly unrelated is Gauss’ Mean Value Theorem, which is significantly cooler (in my opinion) than the standard mean value theorem of the reals.  We will define it formally below, but it says the following: if f is analytic (equivalent to complex differentiable) on some disk D and a is the center point of this disk, then the average of the values of f around the boundary of D is equal to f(a).  That is, to find the value of f(a), it suffices to integrate around a circle centered at a and divide by 2\pi (the number of radians we pass through while integrating).  This is really neat to think about: it tells us not only that there exists some point whose value is equal to the average of the values of f lying on a circle, but, moreover, that this point is actually the center of the circle.  This is intense stuff.

 

Theorem (Gauss’ Mean Value Theorem).  Let f(z) be analytic on some closed disk D which has center a and radius r.  Let C denote the boundary of the disk (that is, C is the circle bounding D).  Then we have that \displaystyle f(a) = \frac{1}{2\pi}\int_{0}^{2\pi}f(a+re^{i\theta})\,d\theta.

 

The proof of this theorem is pretty straightforward: it uses the Cauchy integral formula and an easy substitution.

 

Proof.  Note that, by the Cauchy integral formula, we have \displaystyle f(a) = \frac{1}{2\pi i}\oint_{C}\frac{f(z)}{z-a}\,dz.  The circle with radius r and center a is parametrized by z = a + re^{i\theta}, where \theta runs from 0 to 2\pi (if you don’t believe me, plot some points!).  Substituting this into the integral and noting that dz = ire^{i\theta}\,d\theta, we have that

\displaystyle f(a) = \frac{1}{2\pi i}\int_{0}^{2\pi}\frac{f(a + re^{i\theta})\,ire^{i\theta}}{re^{i\theta}}\,d\theta = \frac{1}{2\pi}\int_{0}^{2\pi}f(a+re^{i\theta})\,d\theta

as required.  \diamond
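Just for fun, here is a quick numerical sanity check of the theorem.  Everything in it (the choice f(z) = e^{z}, the center a, the radius r) is just an arbitrary example I picked, so feel free to swap in your own favorite analytic function.

```python
# Approximate the average of f over the circle of radius r about a and
# compare it with f(a); by Gauss' MVT the two should agree.
import numpy as np

def boundary_average(f, a, r, n=10_000):
    """Approximate (1/2pi) * integral_0^{2pi} f(a + r e^{i theta}) d theta."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(f(a + r * np.exp(1j * theta)))

f = np.exp              # an arbitrary entire function
a = 0.3 + 0.2j          # arbitrary center
r = 2.0                 # arbitrary radius

print("f(a)             =", f(a))
print("boundary average =", boundary_average(f, a, r))
# Both lines print (essentially) the same complex number.
```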

 

Why bring up this neat little theorem?  Well, by itself it doesn’t seem to be all that useful; when would we be able to calculate and average a whole ton of values of an analytic function surrounding a point, but not be able to find the value at the point itself?  But this little theorem packs some punch as a way of bounding certain values.  In particular, it gives a neat proof of the Maximum Modulus Theorem.  You might have guessed this from the title of this post.

 

First, let’s note something quickly.

 

Lemma.  Given the assumptions in Gauss’ MVT, we have \displaystyle |f(a)|\leq \frac{1}{2\pi}\int_{0}^{2\pi}|f(a+re^{i\theta})|\,d\theta

 

Be careful here in thinking that this should be an equality: we are now comparing the modulus of the value at the center with the moduli of the values on the circle, and all we get is an inequality.  But this lemma comes almost for free:

 

Proof.  We have |f(a)| =\left|\frac{1}{2\pi}\int_{0}^{2\pi}f(a+re^{i\theta})\,d\theta\right| by using Gauss’ MVT and simply taking the modulus of both sides.  Note that

\displaystyle\left|\frac{1}{2\pi}\int_{0}^{2\pi}f(a+re^{i\theta})\,d\theta\right| = \frac{1}{2\pi}\left|\int_{0}^{2\pi}f(a+re^{i\theta})\,d\theta\right|

\displaystyle \leq \frac{1}{2\pi}\int_{0}^{2\pi}|f(a+re^{i\theta})|\,d\theta

whence the inequality above.  \diamond
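To see that the inequality really can be strict, here is a tiny numerical example (again, the function and radius are just arbitrary choices): take f(z) = e^{z} and a = 0, so the left-hand side is |f(0)| = 1, while the right-hand side is the average of e^{r\cos\theta}, which is strictly bigger than 1 for any r > 0.

```python
# Compare |f(a)| with the average of |f| over the circle, for f = exp, a = 0.
import numpy as np

r = 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)

lhs = abs(np.exp(0.0))                                  # |f(a)| = 1
rhs = np.mean(np.abs(np.exp(r * np.exp(1j * theta))))   # average of |f| on the circle

print(lhs, "<=", rhs)   # 1.0 <= 1.266...  (the average of e^{cos(theta)})
```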

 

This lemma tells us that the modulus of the value at the center of any circle is bounded by the average of the moduli of the values on that circle.  We’ll see why this is the crucial bound we’ll need in the MMT’s proof below.

 

Theorem (Maximum Modulus Theorem).  Given f analytic on some domain D, if f is non-constant on D then the maximum value of |f(z)| for z\in D will occur on the boundary of D.  (Alternatively, if |f(z)| attains its maximum at a point in the interior of D, then f is constant on D.)

 

Proof.  We’ll split this into two steps.  The first step handles the specific case where D is a closed disk and our maximum modulus occurs at the center of this disk.  The second step takes an arbitrary domain D, constructs some closed disks in the interior of D, and "pieces these together" to show that f is constant on all of D.

Step 1: Let’s suppose that our maximum modulus is at the center point of D, which we will call z_{0}; that is, we are supposing that |f(z)|\leq |f(z_{0})| for every z\in D.  Since z_{0} is the center of the disk, for any r up to the radius of D the circle of radius r about z_{0} is completely contained in D; let C denote this circle.  By the lemma above we have that

\displaystyle|f(z_{0})|\leq \frac{1}{2\pi}\int_{0}^{2\pi}|f(z_{0}+re^{i\theta})|\,d\theta

BUT, using that |f(z)|\leq |f(z_{0})| for every z\in D we have that

\displaystyle \frac{1}{2\pi}\int_{0}^{2\pi}|f(z_{0}+re^{i\theta})|\,d\theta \leq \frac{1}{2\pi}\int_{0}^{2\pi}|f(z_{0})|\,d\theta = |f(z_{0})|.

Stringing these inequalities together and suggestively re-writing |f(z_{0})| = \frac{1}{2\pi}\int_{0}^{2\pi}|f(z_{0})|\,d\theta, we have that

\displaystyle \frac{1}{2\pi}\int_{0}^{2\pi}|f(z_{0})|\,d\theta = \frac{1}{2\pi}\int_{0}^{2\pi}|f(z_{0}+re^{i\theta})|\,d\theta

and by subtracting,

\displaystyle 0 = \frac{1}{2\pi}\int_{0}^{2\pi}\left(|f(z_{0})| -|f(z_{0}+re^{i\theta})|\right) \,d\theta

but since the integrand is continuous and always positive or zero (why?) it must be the case that

\displaystyle |f(z_{0})| -|f(z_{0}+re^{i\theta})| = 0

or, in other words, |f(z_{0})| =|f(z_{0}+re^{i\theta})| for every \theta.  Since r was arbitrary (any radius up to that of D works), we conclude that |f(z_{0})| = |f(z)| for every z\in D.  And since an analytic function whose modulus is constant on a disk must itself be constant there (a quick Cauchy-Riemann exercise), f is in fact constant on D.

 

Step 2: Now suppose we have some arbitrary domain D and f is analytic on all of D.  I will hand-wave a bit here, but you can fill in the details.  Note that a domain (in this context) necessarily means open and path-connected (and, in fact, it usually denotes a simply connected open subset of {\mathbb C}).   Suppose that our maximum modulus occurs at some point in the interior of D which we will call z_{0}.  Now, given any other point w\in D, we have some path from z_{0} to w which is completely contained in D.  In fact, we can make this path a finite polygonal path; that is, a path made out of a finite number of straight line segments connected end-to-end.  We will denote this z_{0}L_{0}z_{1}L_{1}\cdots L_{n-1}w, where L_{i} is the segment with endpoints z_{i} and z_{i+1} (and we set z_{n} = w).  I will let you work the details out here, but it can be done.

 

[Figure: a polygonal path from z_{0} to w lying entirely inside the domain D.]

 

Now, the polygonal line might be right next to the boundary, and we don’t want to accidentally hit the boundary when we start making balls around points, so let \epsilon denote whichever is smaller: the distance from the polygonal line to the boundary, or 1.  (Since D is open and the path is compact, no point of the path lies on the boundary, so this distance is strictly positive.)  So, if your polygonal line is right next to the boundary, we might need to make \epsilon pretty small; but if not, we can just let it be whatever we want, so we might as well make it 1.  Now, let’s break up our polygonal path into another polygonal path z_{0}L_{0}z_{1}L_{1}\cdots L_{n-1}w where each L_{i} has length less than \frac{\epsilon}{4}.  It is clear we can do this just by partitioning each straight line in our original path so that their lengths are appropriately small; note, we still only have a finite number of endpoints z_{i}.  That’s important.

[Figure: the polygonal path, partitioned so that each segment has length less than \frac{\epsilon}{4}.]

 

(In the picture above, I’ve made the original endpoints blue and then partitioned our polygonal path with the new red endpoints to make each line segment less than \frac{\epsilon}{4}.)

Now everything is going to fall pretty quickly, so keep on your toes.  First, make a disk of radius \frac{\epsilon}{2} around each z_{i} and call it D_{i} (we use \frac{\epsilon}{2} rather than \epsilon so that the closed disk is guaranteed to stay inside D).  Now note that, by our previous step, since our maximum modulus occurs at z_{0}, we have f(z) = f(z_{0}) for every point z\in D_{0}.  But z_{1} is in D_{0}:

[Figure: z_{1} lying inside the disk D_{0}.]

(This picture is not drawn to scale because I am not a good artist; this is illustrating z_{1} being inside the disk D_{0}.)

So now z_{1} is also a point of maximum modulus (since z_{0} was), and so f(z_{1}) = f(z) for every point z in D_{1}.  Continue this along the path and we will obtain f(z_{0}) = f(w).  Since w was an arbitrary point, it follows that f(z_{0}) = f(w) for every w\in D.  Hence, if f attains its maximum modulus at an interior point of some domain D, then it is constant.  This implies directly that any non-constant analytic function can only achieve its maximum modulus on the boundary.  \Box
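If you’d like to see the theorem in action numerically, here is a rough sketch: sample |f| over the unit disk for a non-constant analytic f and check where the biggest values show up.  The particular f below is just an arbitrary example.

```python
# Sample |f| at random interior points of the unit disk and at points of the
# boundary circle; the maximum should show up on the boundary.
import numpy as np

def f(z):
    return np.exp(z**2) + 3*z   # an arbitrary non-constant entire function

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (200_000, 2))
interior = pts[:, 0] + 1j * pts[:, 1]
interior = interior[np.abs(interior) < 0.99]            # strictly inside the disk

theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
boundary = np.exp(1j * theta)

print("max |f| over interior samples:", np.abs(f(interior)).max())
print("max |f| over the boundary:    ", np.abs(f(boundary)).max())
# The boundary maximum comes out strictly larger, as the theorem predicts.
```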

Jordan’s Lemma.

August 9, 2011

[This post is for those of you who are already comfy with doing some basic contour integrals in complex analysis.]

 

So you’re sitting around, evaluating contour integrals, and everything is fine.  Then something weird comes up.  You’re asked to evaluate an integral that looks like

\displaystyle \int_{-\infty}^{\infty} e^{aix}g(x)\, dx

where g(x) is continuous.  Eek.  Don’t panic though, because Camille Jordan’s gonna help you out.
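For a concrete example of this kind of integral (my own illustrative choice, not anything special): take a = 1 and g(x) = \frac{1}{1+x^{2}}.  A residue computation gives \pi/e, and a rough numerical check agrees; the cutoff at |x| = 500 below is an arbitrary truncation.

```python
# Numerically approximate the integral of e^{ix}/(1+x^2) over the real line
# by truncating to [-500, 500] and using a fine Riemann sum.
import numpy as np

x = np.linspace(-500.0, 500.0, 2_000_001)
dx = x[1] - x[0]
approx = np.sum(np.exp(1j * x) / (1.0 + x**2)) * dx

print("numerical answer:", approx)          # ~ 1.1557 + 0j
print("pi / e          :", np.pi / np.e)    # ~ 1.1557
```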



I’ve been a bit busy moving and packing my things, but I’ve tried to keep up my problem-doing as practice for my analysis qual.  One problem that seems to come up quite a bit in the complex analysis part of the exam is something like the following:

 

Question.  Suppose that f is entire and satisfies |f(z)| \leq A|z|^{k} + B for every z\in {\mathbb C}, where A, B > 0 are fixed constants.  Prove that f is a polynomial of degree at most k.
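(If you want a hint before working it out: the standard tool here is the Cauchy estimate for the Taylor coefficients.  Writing f(z) = \sum_{n\geq 0}c_{n}z^{n} and integrating around the circle |z| = R, the given bound yields

\displaystyle |c_{n}| = \left|\frac{1}{2\pi i}\oint_{|z|=R}\frac{f(z)}{z^{n+1}}\,dz\right| \leq \frac{AR^{k}+B}{R^{n}},

and letting R\to\infty kills every coefficient with n > k.)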

 


I’ve been up for a while doing practice qualifying exam questions, and sometimes I hit a point where I just do whatever it is that comes to my mind first, no matter how tedious or silly it seems.  This is a bad habit.  I’ll show why with an example.

 

Here’s the question.  Let C be the unit circle oriented counterclockwise.  Find the integral

\displaystyle\int_{C}\frac{\exp(1 + z^{2})}{z}\, dz.

 

The sophisticated reader will immediately see the solution, but humor me for a moment.  I attempted to do this by Taylor expansion.  Here are the calculations I did:

 

= \displaystyle\int_{C} \frac{1 + (1+z^{2}) + \frac{(1 + z^{2})^{2}}{2!} + \frac{(1 + z^{2})^{3}}{3!} + \cdots}{z}\, dz

 

Applying the binomial theorem to the numerator terms, this becomes:

 

= \displaystyle\int_{C} \frac{1 + (1+z^{2}) + \frac{\sum_{n=0}^{2}\binom{2}{n}z^{2n}}{2!} + \frac{\sum_{n=0}^{3}\binom{3}{n}z^{2n}}{3!} + \cdots}{z}\, dz

 

And at this point we note that everything is going to die off when we take the integral except the \frac{1}{z} term.  Our residue (its coefficient) will be:

1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots

which can also be written (slightly more suggestively) as:

\frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots

which we should recognize as the Taylor series of e^{z} evaluated at z = 1; that is, the residue is e.  Nice!  Now we note that plugging in z = e^{i\theta} to take the contour integral (ignoring all those terms which don’t matter) will force us to integrate

\displaystyle e \int_{0}^{2\pi}\frac{1}{e^{i\theta}}ie^{i\theta}\, d\theta = ie\int_{0}^{2\pi}\, d\theta = 2\pi i e.

Cutely, if we think of the Greek letter \pi as being a "p", this solution spells out "2pie".

 

But now, readers, let’s slow down.  This is, indeed, the correct answer.  But if I had just looked at the form of the integrand, I would have seen an everywhere-analytic function divided by something of the form z - z_{0} (here, with z_{0} = 0).  This screams Cauchy Integral Formula.  Indeed, according to the CIF, we should get the solution as

2\pi i \exp(1 + 0^2) = 2\pi i e

which is exactly what we got before, but which only took about 4 seconds to do.  It’s nice to be able to check yourself by doing something two different ways, but when time isn’t on your side (like in a qualifying exam situation, for example!), then remember:

Think before you Taylor Expand.
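(And if you ever want a third way to check an answer like this, a couple of lines of Python parametrizing the circle will do it; the sketch below just discretizes z = e^{i\theta}.)

```python
# Numerically approximate the contour integral of exp(1+z^2)/z around the
# unit circle via z = e^{i theta}, dz = i e^{i theta} d theta.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
z = np.exp(1j * theta)
dz_dtheta = 1j * z

integral = np.mean(np.exp(1.0 + z**2) / z * dz_dtheta) * 2.0 * np.pi

print("numerical:", integral)             # ~ 17.0795j
print("2*pi*i*e :", 2j * np.pi * np.e)    # ~ 17.0795j
```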

I was skimming over Brown’s Complex Analysis book, and I came across a neat little exercise I thought I would share.  The solution is not difficult — indeed, it is just a manipulation of equations — but the idea is interesting and especially telling about the strange kinds of not-so-symmetric things that go on in complex analysis. 


(I’ve decided against giving a proof of Rouché’s theorem until such a time as I find one that doesn’t use algebraic topology and isn’t tedious as hell.)

 

Let’s simply state Rouché’s theorem, and then we’ll talk about how to actually apply it.

 


(In this part: The Argument Principle and the Winding Number.)

Each of these three theorems (the argument principle, the winding number theorem, and Rouché’s theorem) is interesting in its own right, but something really special happens when you put them into a cocktail mixer and shake them up together.  Really; I’m not a fan of analysis, but what we’re doing in this post I think of as almost magical.


EDIT: It seems that scribd is behind a paywall now.  :(

Brown and Churchill (8th ed) was the book I used for the second complex analysis class I’ve had to take so far (the first was Lang).  My class went over the first six chapters and half of the seventh: so, up to the middle of the section on applications of residues.

To prep for the final, I compiled a quick, slightly-shorter list of things that I feel the complex analysis student should know if they’ve used this book and have gotten to around the same point.  I’ve excluded the chapter on applications of residues, since it’s a relatively short chapter with better pictures in the text than ones I could draw at 5am.  Because sharing is caring, below is a link to the pdf.  Enjoy!

http://www.scribd.com/doc/45170305

In another post, I noted something a bit strange at first reading: some authors use the word holomorphic to describe having a power series expansion and reserve analytic for complex differentiable, while other authors swap those terms.  I then noted that “this doesn’t matter.”  Well, why not?  I mean, definitions are pretty important in mathematics!  My reasoning is: these are really the same thing.  If f is holomorphic then it is also analytic, and vice versa.  I’ve been putting off doing this proof for way too long, so let’s just get it over with.  It’s not hard; it’s just an analysis proof, which means that it’s extremely easy to describe (“take a little ball and do something in it”) but extremely tedious to work out (“take an epsilon such that this epsilon is less than the sum of the minimum of the supremums of…”).  Still, I’m going to try to give a complete proof and motivate every step.

After this, I’ll give a short proof that if a function is complex differentiable (holomorphic, to me) once, then it is complex differentiable infinitely many times.  It’s not a direct corollary, but it’s a nice fact to know.


Morera’s Theorem.

October 29, 2010

Morera’s theorem is named after the mathematician Giacinto Morera, whose name is pretty sweet but is second only to his ultra-fly mustache:

[Photo of Giacinto Morera.]

Morera’s theorem is an extremely important result in complex analysis: it states that if f is a continuous function defined on an open set D in the complex plane, and the integral of f around every closed curve C in D is zero, then, in fact, f must be complex differentiable everywhere in D.  This is a common tool in the proofs of other theorems, and it is also worthwhile in its own right, showing that such a continuous function is actually much nicer than “just” continuous.
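A standard illustration of the “common tool” remark: to show that a locally uniform limit f of analytic functions f_{n} is analytic, one verifies the hypothesis on each small disk inside the domain.  The limit f is continuous, and for any closed curve C lying in such a disk we may pass the limit through the integral:

\displaystyle \oint_{C}f(z)\,dz = \lim_{n\to\infty}\oint_{C}f_{n}(z)\,dz = 0,

where each integral on the right vanishes by Cauchy’s theorem; Morera’s theorem then says f is analytic on that disk, and hence on the whole domain.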
