The Argument Principle, The Winding Number, and Rouché’s Theorem (Part 1.)
December 15, 2010
(In this part: The Argument Principle and the Winding Number.)
Each of these three theorems (the argument principle, the winding number theorem, and Rouché’s theorem) is interesting in its own right, but something really special happens when you put them into a cocktail mixer and shake them up together. Really: I’m not a fan of analysis, but what we’re doing in this post I think of as almost magical.
Throughout complex analysis, after visiting analytic functions, we get to residue theory. The point of a residue is that when we integrate a complex function term-by-term about a loop around a point $z_0$, everything "dies" off (because each term of the Laurent expansion has an entire anti-derivative) except for the $\frac{1}{z - z_0}$ term; thus, traveling around the loop, this term contributes something to the integral. Specifically, it contributes $2\pi i$ times the "scaling factor" coefficient of the $\frac{1}{z - z_0}$ term, and this coefficient is called the residue. I’m not sure if they call it the "residue" because it’s what’s left over when you integrate about a curve, but it’s a nice way to think about it.
Either way, the residues become quite important: we differentiate (in the non-calculus sense of the word!) between different "types" of singular terms, and we call them poles of different orders; for example, $\frac{1}{(z - z_0)^{5}}$ has a pole of order 5 at $z_0$; on the other hand, $\frac{1}{(z - z_0)^{65}}$ has a pole of order 65 at the point $z_0$. There are a number of theorems which tell us how to integrate functions with poles like this.
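To make the residue story concrete, here is the standard one-term computation (with a function I’m choosing just for illustration): take $f(z) = \frac{3}{z} + 5z^2$ and integrate around the unit circle. The $5z^2$ term has the entire anti-derivative $\frac{5}{3}z^3$ and so contributes nothing, while the $\frac{3}{z}$ term contributes $2\pi i$ times its coefficient:

$$\oint_{|z| = 1} \left( \frac{3}{z} + 5z^2 \right) dz = 2\pi i \cdot 3 + 0 = 6\pi i,$$

so the residue of $f$ at $0$ is $3$.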
This is certainly a nice thing to look at in complex analysis, as integrating around curves gives us a lot of information (and even leads to unexpected applications, like evaluating integrals of real functions over the entire real line), but what else would we like to know about complex functions? Well, when we look at real-valued functions, we always like to know where the zeros of the function are; this, in turn, allows us to find maxima, minima, and so forth.
Now, the theorem that follows gives an unexpected relationship (at least, it was unexpected for me!) between the number of poles and the number of zeros of a function in the interior of a simple closed curve.
Theorem (Argument Principle): Let $f$ be a meromorphic function (that is, holomorphic at all but a finite number of finite poles) inside of and on some simple closed contour $\gamma$. If $f$ has no zeros or poles on the curve $\gamma$, then

$$\frac{1}{2\pi i} \oint_{\gamma} \frac{f'(z)}{f(z)}\, dz = N - P$$

where $N$ is the number of zeros of $f$ inside $\gamma$ counted with multiplicity and $P$ is the number of poles inside $\gamma$ counted with orders.
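Before the proof, a quick numerical sanity check may help. This is my own illustration, not part of the original argument; the test function, the sample count, and the finite-difference step are all choices I’m making for the sketch. We take an $f$ with three zeros (counted with multiplicity) and one pole inside the unit circle, approximate the contour integral by a Riemann sum, and check that it comes out near $3 - 1 = 2$.

```python
import cmath

# f has a double zero at 0.2, a simple zero at -0.3, and a
# simple pole at 0.1i -- all inside |z| = 1 -- so N - P = 3 - 1 = 2.
def f(z):
    return (z - 0.2)**2 * (z + 0.3) / (z - 0.1j)

def f_prime(z, h=1e-6):
    # central-difference approximation to f'(z); accurate enough here
    return (f(z + h) - f(z - h)) / (2 * h)

def argument_principle_integral(n=20000):
    """(1 / 2 pi i) times the integral of f'/f over the unit circle,
    parametrized as z = e^{it}, dz = i e^{it} dt."""
    total = 0.0
    dt = 2 * cmath.pi / n
    for k in range(n):
        z = cmath.exp(1j * k * dt)
        total += (f_prime(z) / f(z)) * 1j * z * dt
    return total / (2j * cmath.pi)

result = argument_principle_integral()
print(round(result.real), round(abs(result.imag), 3))  # close to 2 and 0.0
```

The imaginary part only vanishes up to discretization and finite-difference error, which is why the check rounds rather than compares exactly.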
Proof. This proof is surprisingly easy, and it’s one where you can just "follow your nose" once you know where to begin. If $z_0$ is a zero of $f$ then we can write, by factoring, $f(z) = (z - z_0)^{k} g(z)$, where $k$ is the multiplicity, and we note that $g(z_0) \neq 0$ or we could just factor out another factor of $(z - z_0)$. Taking the derivative of $f$ gives us:

$$f'(z) = k(z - z_0)^{k-1} g(z) + (z - z_0)^{k} g'(z)$$
by the product rule. Thus, we divide and note that

$$\frac{f'(z)}{f(z)} = \frac{k}{z - z_0} + \frac{g'(z)}{g(z)}.$$
Note the first term: this term is our residue term. Since $g(z_0) \neq 0$, we have that the latter term $\frac{g'}{g}$ is analytic at $z_0$ and hence in a neighborhood of $z_0$. This is nice. But this means that the residue of $\frac{f'}{f}$ at $z_0$ will just be $k$, the multiplicity of the zero $z_0$.
Okay. Now, do the same thing for a pole. If $f$ has a pole at $p_0$ then almost the same thing happens! We can write, by factoring, $f(z) = \frac{h(z)}{(z - p_0)^{m}}$, where $m$ is the order of the pole, and we note that $h(p_0) \neq 0$ or we could just factor out another factor of $(z - p_0)$. Taking the derivative, we find that

$$f'(z) = \frac{-m\, h(z)}{(z - p_0)^{m+1}} + \frac{h'(z)}{(z - p_0)^{m}}.$$
Dividing, we find that

$$\frac{f'(z)}{f(z)} = \frac{-m}{z - p_0} + \frac{h'(z)}{h(z)},$$
and in the same way as before, we note that the residue of the first term is $-m$, the negative of the order of the pole, and the second term is analytic in and around $p_0$. Thus the residue at $p_0$ is $-m$.
Therefore, each zero $z_0$ of $f$ of multiplicity $k$ gives us a residue of $k$ for $\frac{f'}{f}$, and each pole $p_0$ of order $m$ gives a residue of $-m$ for $\frac{f'}{f}$.
It is up to the reader to show that the only poles that occur in $\frac{f'}{f}$ are at the zeros and poles of $f$. This isn’t that hard (it’s a quick contradiction), but it’s a good thing to write out for yourself. Now, using the residue theorem, we note that

$$\frac{1}{2\pi i} \oint_{\gamma} \frac{f'(z)}{f(z)}\, dz = \sum_{i} k_i - \sum_{j} m_j$$
where $k_i$ is the multiplicity of the $i$-th zero, and $m_j$ is the order of the $j$-th pole. Thus, as in our original statement of the theorem,

$$\frac{1}{2\pi i} \oint_{\gamma} \frac{f'(z)}{f(z)}\, dz = N - P$$
as required. $\blacksquare$
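To see the whole theorem in action on a small example (my own, not from the original argument): take $f(z) = \frac{z^3}{z - 1}$, which has a zero of multiplicity $3$ at $0$ and a simple pole at $1$. Logarithmic differentiation gives

$$\frac{f'(z)}{f(z)} = \frac{3}{z} - \frac{1}{z - 1},$$

so for any simple closed contour $\gamma$ enclosing both points (say, $|z| = 2$),

$$\frac{1}{2\pi i} \oint_{\gamma} \frac{f'(z)}{f(z)}\, dz = 3 - 1 = 2 = N - P.$$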
Alright, good, that wasn’t so bad. You may be wondering where this idea of $\frac{f'}{f}$ comes from. How did someone ever think to look at the integral of that? Well, it’s actually not as strange as you might think: in particular, where have you seen

$$\frac{f'(z)}{f(z)}$$
before? Hm. Right! Logs! Indeed, $\frac{d}{dz} \log(f(z)) = \frac{f'(z)}{f(z)}$. So, in fact, this quotient’s integral is measuring changes in $\log(f(z))$ as we go about our curve $\gamma$. We know a little bit about the change that this’ll have though, right? It’s going to be some multiple of $2\pi i$, since our curve is a closed loop and the Cauchy residue theorem states this, and whatever our $2\pi i$ is multiplied by is essentially "how far" our function goes around (think, again, of the Log function as a helix; this is saying how far our function goes "up or down the helix"). We have just proved that this number will be exactly $N - P$, the number of zeros (with multiplicity) minus the number of poles (with orders), so we give a special name to this value.
Definition: The Winding Number of $f$ along the curve $\gamma$ about 0 is the value $N - P$ in the notation above.
I should point out that because our functions are centered at 0, this is the winding number about the point 0, and by the paragraph above it sort of measures "how many times" $f(\gamma)$ wraps a loop around the point $0$. I should also point out that this number has heavy links to topology. (Try explaining this number in terms of its value in the fundamental group, for example.) In particular, the "degree" of a map (especially seen in the proofs in May’s and Hatcher’s texts) is defined using the concept of a winding number. We will go over the winding number a bit more later when we need it in topology (and, at that point, we’ll draw a bunch of pictures of it).
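Since we will eventually want pictures of the winding number, here is one direct way to compute it numerically (again my own sketch; the function and step count are arbitrary choices): walk $z$ around the unit circle, accumulate the small changes in $\arg f(z)$ while unwrapping the branch-cut jumps, and divide the total by $2\pi$.

```python
import cmath

# f(z) = (z - 0.5)^2 has a double zero inside |z| = 1 and no poles,
# so its image should wind around 0 exactly twice (N - P = 2).
def f(z):
    return (z - 0.5)**2

n = 5000
total_change = 0.0
prev = cmath.phase(f(1.0))  # start of the loop, z = e^{i*0}
for k in range(1, n + 1):
    z = cmath.exp(2j * cmath.pi * k / n)
    cur = cmath.phase(f(z))
    step = cur - prev
    # phase() returns values in (-pi, pi]; apparent jumps of ~2*pi are
    # just branch-cut artifacts, so unwrap them
    if step > cmath.pi:
        step -= 2 * cmath.pi
    elif step < -cmath.pi:
        step += 2 * cmath.pi
    total_change += step
    prev = cur

winding = total_change / (2 * cmath.pi)
print(round(winding))  # the image of the circle wraps around 0 twice
```

This is exactly the "up or down the helix" picture made computational: each unwrapped step is a small motion along the helix of the logarithm, and the total is how many full turns were made.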
Part 2 of this post will cover Rouché’s theorem, as well as a number of examples that make use of it. Rouché’s theorem is also used in a surprising number of other proofs (the open mapping theorem, yet another proof of the fundamental theorem of algebra, and a few others).