## Greatest Common Divisors; Euclidean Algorithm.

### January 5, 2013

Recently, I’ve been learning to program in a new language and have been doing Project Euler problems for practice — of course, as the name suggests, most of these problems must be solved (efficiently) with mathematical techniques. Two of the most common algorithms I’ve used are prime testing and GCD finding. I’ll post about the former later, but the latter is an interesting problem in its own right:

**Initial Problem**. Given two natural (positive whole) numbers called $a$ and $b$, can we find some other natural number that divides both of them?

This problem is a first step. It’s nice to be able to write numbers as a multiple of some other number; for example, if we have 16 and 18, we may write them as $2\cdot 8$ and $2\cdot 9$, thus giving us some insight as to the relationship between these two numbers. In that case it may be easy to see, but perhaps if you’re given the numbers 46629 and 47100, you may not realize right away that these numbers are $471\cdot 99$ and $471\cdot 100$ respectively. This kind of factorization will reveal "hidden" relationships between numbers.

So, given two numbers, how do we find if something divides both of them — in other words, how do we find the *common divisors* of two numbers? If we think back to when we first began working with numbers (in elementary school, perhaps) the first thing to do would be to note that 1 divides *every number*. But that doesn’t help us all that much, as it turns out, so we go to the next number: if both numbers are even, then they have 2 as a common factor. Then we "factor" both numbers by writing them as $2\cdot(\text{something})$ and then attempt to keep dividing things out of the *something*. We then move onto 3, skip 4 (since anything divisible by 4 is already divisible by 2 twice), go onto 5, then 7, then… and continue through the primes. This gives a *prime* factorization, but we have to note that if, say, 2 and 5 divide some number, then so does 10. These latter divisors are the *composite* factors.

This seems excessive, but it is sometimes the only way one can do it.

**Anecdote!:** On my algebra qualifying exam, there was a question regarding a group of order 289 which required us to see if 289 was prime or not; if not, we were to factor it. We were not allowed calculators, so what could we do? Try everything. Note that we only need to try up to the square root of the number (which we could estimate in other ways), but it’s still a number of cases. If you check, none of the following numbers divide into 289: 2, 3, 5, 7, 11, 13. At this point, I was about to give up and call it a prime, but, for whatever reason, I decided to try 17. Of course, as the clever reader will have pointed out, $289 = 17\cdot 17$. It is not prime. There was, luckily, only one student who thought it was prime, but it points out how the algorithm above is not entirely trivial if one does not have access to a computer or calculator.
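In code, the same brute-force idea is only a few lines (this sketch and its function name are mine, not from the post):

```python
def smallest_factor(n):
    # Trial division: we only need to test candidates d with d*d <= n,
    # since any factor larger than sqrt(n) pairs with one smaller than sqrt(n).
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor found below sqrt(n), so n is prime
```

Here `smallest_factor(289)` returns 17, catching exactly the factorization that is so easy to miss by hand.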

Once we have a common divisor, or a set of common divisors, a natural thing to want to do is to find the *biggest* (we already have the smallest, 1) since in this way we can write our numbers with the largest common factor multiplied by some other number. It will, in effect, make things prettier.

**Real Problem.** Find the *greatest* divisor which is common to two natural numbers, $a$ and $b$.

If you were just learning about this kind of thing, you may spout out the following solution: find *all* of the common divisors, then pick the greatest. While this is not especially efficient, it *is* a solution. Unfortunately, even for small numbers, this gets out of hand quickly. For example, 60 and 420 have the following common divisors: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60. This takes a while to compute by hand.
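The brute-force approach is at least easy to write down; a minimal sketch (my code, not the post’s):

```python
def common_divisors(a, b):
    # Test every candidate up to the smaller number -- fine for small
    # inputs, hopeless for large ones.
    return [d for d in range(1, min(a, b) + 1) if a % d == 0 and b % d == 0]
```

`common_divisors(60, 420)` reproduces the twelve divisors listed above.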

Even if we were to find prime factors, this would be $60 = 2^{2}\cdot 3\cdot 5$ and $420 = 2^{2}\cdot 3\cdot 5\cdot 7$, which gives us that they share a number of prime factors. A bit of thinking gives us that we take all of the prime factors they "share" and multiply them together to get the greatest common divisor. This is another potential solution which is much faster than simply listing out all of the common divisors. Unfortunately, this falls prey to the same kind of trap that other prime-related problems do: it is, at times, especially difficult to factor large composite numbers. For example, the "reasonably small" number 49740376105597 has a prime factorization which is not at all efficient to find if one does not have a computer or a specialized calculator with a factoring algorithm on it. As a mean joke, you may ask your friend to factor something like 1689259081189, which is actually the product of the 100,000th and 100,001st primes — that is, they would need to test 99,999 primes before getting to the one which divides the number. If they divided by one prime per second (which is quite fast!) this would take them 1 day, 3 hours, and 46 minutes. Not especially effective, but it *will* eventually get the job done.

**Real Problem, With Efficiency:** Find the greatest divisor which is common to two natural numbers, $a$ and $b$, but do so in an efficient manner (we’ve all got deadlines!).

We need to sit down and think about this now. We need an entirely new idea. We note, at least, that one of the two numbers must be larger than the other (or else the problem is trivial). One thing to try would be to see if the smaller one goes into the larger one (for example, above we had 60 going into 420, which gave us the easy solution that 60 must be the greatest common divisor). If not, maybe we can see how much is left over. That is, if $a$ is the larger number and $b$ the smaller,

$$a = qb + r$$

where $q$ here is the number of times $b$ goes into $a$ without exceeding it, and $r$ is the "remainder"; if $r$ is equal to 0, then $b$ evenly divides into $a$, and otherwise $r$ is less than $b$ (or else we could divide an additional $b$ into $a$).

Using this, if $r = 0$, we may write $a = qb$; this means that, in particular, $b$ divides $a$ and $b$, so it is a factor of $a$ and of $b$. But if $r \neq 0$, then $b$ is not actually a factor of $a$; so let’s see how many times $r$ goes into $b$. Using the same process…

$$b = q_{2}r + r_{2}$$

and by rearranging, we have that if $r_{2} = 0$ then $b = q_{2}r$ is divisible by $r$. So, $b$ is divisible by $r$, but we aren’t sure if $a$ is divisible by $r$… if it were, we would be able to say that $r$ was a common divisor of $a$ and $b$ (why?). That’s *something* at least.

The cool thing about our algorithm here is that either $r_{2} = 0$ and we’re done with the algorithm, or $r_{2} \neq 0$ and we may form a new equation $r = q_{3}r_{2} + r_{3}$; this equation has, on the left-hand side, the number $r$, which is less than the previous equation’s left-hand side, which was $b$. Continuing this process, we will have $b, r, r_{2}, r_{3}, \dots$ on the left-hand sides, each of which is less than the one which came before it. Because $r_{i} \geq 0$ for any of the remainders, *eventually* one of them will become 0 (why?) and this algorithm will terminate. That is, we will have found *some* $r_{i}$ which is a common divisor for both $a$ and $b$; specifically, it will be the $r_{i}$ such that $r_{i+1} = 0$ (or, it may simply be $b$ if $b$ divides $a$).

This algorithm, called the *Euclidean Algorithm,* actually does more "automatically": it not only finds a common divisor, but actually finds the *greatest common divisor* of $a$ and $b$, which, from now on, we will denote $\gcd(a, b)$. The "proof" of this is simply noting that $\gcd(a, b) = \gcd(b, r)$ (we noted this above without making reference to the gcd, but the reader should attempt to go through all the same steps using the idea of the gcd).

So. If you have two natural numbers, $a$ and $b$, you divide them, find the remainder, write the equation, then continue as above until you get a 0 remainder. Then you pick the remainder directly before you got 0 as your gcd (or, you pick the smaller number if one number divides the other). Pretty simple algorithm, but is it efficient?
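The whole procedure is only a few lines of code; here is a sketch of it (using the remainder operator `%` to do the division step):

```python
def gcd(a, b):
    # Each pass replaces (a, b) with (b, a % b); the remainders strictly
    # decrease, so the loop terminates, and the last nonzero remainder
    # (held in a once b hits 0) is the greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a
```

Python’s standard library exposes this same algorithm as `math.gcd`.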

Without going into formal "efficiency" definitions, "yes", it is quite efficient. To prove it, let’s take an "average" example using the "large" numbers 1337944608 and 4216212. We note (by pen and paper, or by using a standard calculator) that

1337944608 = 317(4216212) + 1405404.

Next, we note that

4216212 = 3(1405404) + 0

which instantly gives us the solution $\gcd(1337944608, 4216212) = 1405404$. That’s pretty awesome. Note that this was an especially quick trial, but even the "worst" ones are relatively quick.

**Unexpected Corollary!:** For natural numbers $a$ and $b$, if $\gcd(a, b) = d$ then there exist integers $x, y$ such that $ax + by = d$.

This is more useful than you might think at first glance, and we’ll get into why in a later post, but what’s nice about this corollary is that it comes "for free" from the Euclidean algorithm. Note that, since $d$ divides both $a$ and $b$, it suffices to prove this corollary for $\frac{a}{d}$ and $\frac{b}{d}$, which have $\gcd\left(\frac{a}{d}, \frac{b}{d}\right) = 1$. The proof uses induction on the number of steps of the Euclidean algorithm for those numbers, but for those of you who are more experienced and know modular arithmetic, you may enjoy the following simple proof:

*"Clever" Proof of the Corollary: * Let (for equality, the proof is easy). We will only care about remainders in this proof, so we will look at some numbers modulo . Consider

Note there are exactly remainders here and that the remainder never occurs (since are relatively prime). Suppose that for each of the ; that is, the remainder 1 does not ever show up in this list. By the pigeon-hole principle (as there are remainders but only possible values for the remainders) we must have that for some . That is, we have

which implies

but this is impossible, since it implies that either or is some integer multiple of , but and we have assumes are relatively prime. Hence, the remainder must occur. That is, for some and

But what does this mean? It means that there is some integer such that . To make this prettier, let and we find that there exists integers such that , as required. .
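The corollary is also constructive: by recording the quotients as the Euclidean algorithm runs, we can actually compute the integers $x, y$. A sketch of this "extended" Euclidean algorithm (my code, not from the post):

```python
def extended_gcd(a, b):
    # Maintain the invariants old_r == a*old_x + b*old_y and
    # r == a*x + b*y while running the Euclidean algorithm on remainders.
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y  # gcd(a, b) and the coefficients x, y
```

For relatively prime inputs this produces exactly the $x$ and $y$ with $ax + by = 1$ promised by the corollary.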

Pretty slick, no?

## The Burnside Theorem and Counting, part I.

### December 8, 2012

Let’s talk about Burnside’s theorem. First, let me note that there are *two* results commonly called "Burnside’s Theorem." The first one that I learned (which we won’t be discussing in this post) was:

**Theorem (Burnside)**. If $G$ is a finite group of order $p^{a}q^{b}$, where $a, b$ are non-negative integers and where $p, q$ are primes, then $G$ is solvable.

The second one is also a group theoretical result, but a bit more combinatorial-feeling. In some books (and, apparently, Wikipedia) this second result is called Burnside’s Lemma. As noted in the Wikipedia article, this theorem was not even due to Burnside, who quoted the result from Frobenius, who probably got it from Cauchy.

Let’s get some definitions down. As usual, we’ll denote the *order* of the group $G$ by $|G|$, and our groups will *all be finite in this post*. If we have a group $G$ which acts on a set $A$, then given some fixed $g \in G$ we define the set $\text{Fix}_{A}(g) = \{x \in A : g\cdot x = x\}$; this is, of course, the set of fixed points in $A$ when acted on by the element $g$; for the remainder of the post, we will simply write $\text{Fix}(g)$ with the set $A$ implied. Remember, when a group acts on $A$, the "product" $g\cdot x$ will sit inside of $A$, and we write the action as $g\cdot x$ for an element $g \in G$ acting on an element $x \in A$. The *orbit* of a point $x \in A$ when acted on by $G$ is given by $\{g\cdot x : g \in G\}$; we’ll denote this $Gx$, though this is not standard notation. The orbit, essentially, is all of the possible values that you can get to by acting on $x$ with elements in your group.

One thing to note, also, is that orbits are *pairwise disjoint*. You should prove this to yourself if you haven’t already, but the idea is like this: if $Gx$ and $Gy$ are orbits of elements in $A$, then suppose $z \in Gx \cap Gy$; then there are some $g, h \in G$ such that $g\cdot x = z = h\cdot y$, but this implies $x = g^{-1}h\cdot y$, which implies the orbits are identical (why?). Hence, each element of $A$ is in *exactly one orbit*.

We need one more result before we can sink our teeth into Burnside. Remember the fixed point set above? This was all of the elements $x \in A$ such that $g\cdot x = x$ for some fixed $g$. There’s a similar notion called a *Stabilizer*, denoted $G_{x} = \{g \in G : g\cdot x = x\}$; this is saying that we first fix $x \in A$, and then look at all the elements $g \in G$ which stabilize it. These definitions are pretty similar feeling (almost like family!) and, in fact, there is a nice relation between the two:

**Notation.** Let $A/G$ denote the set of orbits of $A$ when acted on by $G$; when $A$ is a group and $G$ is a subgroup this is the same as a quotient. Similarly, $G/G_{x}$ denotes the set of cosets of the stabilizer $G_{x}$ in $G$.

**Theorem (Orbit-Stabilizer Theorem).** There is a bijection between $G/G_{x}$ and $Gx$.

That is, the set of cosets of the subgroup of elements of $G$ which fix $x$ has the same number of elements as the orbit of $x$. This might seem a little confusing at first, but if you work through it, it’s not so weird.

*Sketch of the Proof.* (Skip this if you’re not comfortable with all this notation above; just go down to the next theorem.) Here, we want to show a bijection. Notice that $G/G_{x}$ is the set of cosets of $G_{x}$ in $G$. We claim that the mapping $\varphi: G/G_{x} \to Gx$ which sends $hG_{x} \mapsto h\cdot x$ is well-defined, injective and surjective (but not a homomorphism). First, well-defined: if $hG_{x} = kG_{x}$ then $k^{-1}h \in G_{x}$, which means that $k^{-1}h\cdot x = x$. This implies, after some manipulation, that $h\cdot x = k\cdot x$, which means these elements are identical in $Gx$. Second, surjectivity is clear. Last, if $h\cdot x = k\cdot x$ in the orbit, then $k^{-1}h\cdot x = x$, which implies $k^{-1}h \in G_{x}$, which gives $hG_{x} = kG_{x}$; hence this map is injective. This gives us that our map is bijective. $\Box$

One immediate corollary is that $|Gx| = |G|/|G_{x}|$; that is, the number of elements in the orbit of $x$ is the same as the number of elements in $G$ divided by the number of elements in $G$ which fix $x$. Think about this for a minute.

## Road to the Proof.

Okay. Now, let’s think about something for a second. What is the sum

$$\sum_{g \in G} |\text{Fix}(g)|$$

telling us? This counts the elements in $A$ which are fixed by some $g \in G$; but there might be some overlap, since if $g\cdot x = x$ and $h\cdot x = x$, then $x$ will be counted twice: once as an element of $\text{Fix}(g)$ and once as an element of $\text{Fix}(h)$. But how much overlap is there? This is an innocent seeming question, and you might think something like, "Well, it depends on how much stuff stabilizes each $x$.", and this is pretty close to the point.

First, note that

$$\sum_{g \in G} |\text{Fix}(g)| = |\{(g, x) \in G \times A : g\cdot x = x\}|,$$

which is just the long way to write out this sum; but, the nice part about that is, we can now think about this as counting, for each $x \in A$, all of the elements of $G$ which stabilize it (why?). Then,

$$\sum_{g \in G} |\text{Fix}(g)| = \sum_{x \in A} |G_{x}|.$$

If you don’t see this, you should prove to yourself why they’re the same sum (why is each element counted in the left-hand side also counted in the right-hand side?). Now, by the Orbit-Stabilizer theorem above, this right-hand sum becomes pretty nice. Specifically,

$$\sum_{x \in A} |G_{x}| = \sum_{x \in A} \frac{|G|}{|Gx|} = |G| \sum_{x \in A} \frac{1}{|Gx|},$$

where we noted in the last equality that $|G|$ is a constant, so we may pull it out of the sum.

Recalling that $|A/G|$ denotes the number of orbits, we have that if we take a single orbit (call it $O$) we will be adding up $\frac{1}{|O|}$ exactly $|O|$ times (since the sum is taken over each $x \in A$ so, in particular, over each $x \in O$); hence, we will add 1 for each orbit we have in $A$. That is,

$$|G| \sum_{x \in A} \frac{1}{|Gx|} = |G|\,|A/G|.$$

Putting this all together, we have

$$\sum_{g \in G} |\text{Fix}(g)| = |G|\,|A/G|.$$

We clean it up a bit, and state the following:

**Theorem (Burnside’s).** For a finite group $G$ acting on a set $A$, with notation as above, we have $\displaystyle |A/G| = \frac{1}{|G|} \sum_{g \in G} |\text{Fix}(g)|$.

That is, the number of orbits is equal to the sum, over $g \in G$, of the number of elements of $A$ fixed by $g$, averaged by the number of elements in $G$. Kind of neat.
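As a concrete sanity check (my example, not from the post), consider coloring the four corners of a square with two colors, where two colorings count as the same if a rotation carries one to the other. Burnside’s count and a direct orbit count agree:

```python
from itertools import product

def rotate(coloring, k):
    # A rotation of the square cyclically shifts the tuple of corner colors.
    return coloring[k:] + coloring[:k]

colorings = list(product([0, 1], repeat=4))
rotations = [0, 1, 2, 3]

# Burnside: the number of orbits is the average number of fixed colorings.
burnside_count = sum(
    sum(1 for c in colorings if rotate(c, g) == c) for g in rotations
) // len(rotations)

# Direct count: one canonical (lexicographically least) representative per orbit.
direct_count = len({min(rotate(c, g) for g in rotations) for c in colorings})
```

Both counts come out to 6: the four rotations fix $16, 2, 4, 2$ colorings respectively, and $(16 + 2 + 4 + 2)/4 = 6$.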

Next time, we’ll talk about applications!

## Pick’s Theorem.

### November 22, 2012

### Motivation.

In order to teach kids about the concept of area, math teachers sometimes showed pictures which looked like

and we would be asked to "guess the area." If this was being taught to younger students, the way to do it would be to draw little boxes inside and then try to see how many "half-boxes" were left, how many "quarter-boxes" were left, and so forth. If this was a more sophisticated lesson, our teacher would instruct us to cut up the picture into rectangles and triangles — the one above, for example, has a right triangle on the left-hand side and a 4×2 rectangle on the right-hand side. From this, we could easily use the formula for area for each, and we could deduce the total area from these.

So far nothing is too difficult to solve. But, if we adjust the picture slightly (still using only straight lines to construct our figure) we may get something like

Here, it is difficult to tell what to do. We could try to find "easier" triangles to break this into, but none are immediately obvious. It’s not easy to tell what the angles are (except by using some algebraic methods, finding slopes, etc.). The best one could do would be to "guess" what the lengths of the sides were (or the lengths of the base and height of the triangle).

### Motivating Pick’s Theorem.

Before we begin, let me define some of the things we’ll be working with. A *polygon* is a familiar concept, but we ought to say some things about it:

A *lattice polygon* is a figure on the two-dimensional lattice (as above) such that all of its edges begin and end on a lattice point, no edges overlap except at a lattice point, and its boundary is a closed path (informally, this means that if we started at a lattice point and "traveled" along an edge and kept going in the same direction, we’d eventually come back to where we started). Moreover, the polygon has to have a distinct interior and exterior (as in, it cannot look like a donut with a hole in the middle). [Topologists, note: this means that the boundary of our figure is homeomorphic to a circle and the interior is simply connected.]

### Easy Polygons.

The "easiest" polygons to work with in terms of area are rectangles: once we find the side lengths, we’re done. Let’s look at the rectangle below, ignoring the stuff in the lower-left for now.

We can easily find the area here: it is a 7×3 rectangle, giving us an area of 21. Notice something kind of neat, though; if we look at each interior point of the rectangle, we can associate a unit square with it (In the picture above, I’ve identified the point "a" with the first unit square "a", and the lattice point "b" with the square "b"). Using just the interior points, we can fill up most of this rectangle with unit squares, as below:

We see that we get "almost" all of the area accounted for if we only identify interior points like this; we miss the "¬" shape on the top. But besides this weird shape, we learned that if we have a rectangle with interior points, we will be able to fill up units squared of area in our rectangle.

If we do the same process (which I mark with a different color) for the boundary points on the top of the rectangle by identifying them with a unit square to their lower-left, we notice that we have to "skip" one on the far-left.

[For clarification, the second lattice point on the top of the rectangle corresponds to the first box, the third point corresponds with the second box, and so forth. We are forced to skip one lattice point (the first one) on the top.]

Notice that if we have $I$ interior points and $T$ points on the top boundary of the rectangle, then there must be $T - 2$ points in each row of interior points; hence, there must be $\frac{I}{T-2}$ points in each column of interior points.

Notice, last, that in our picture we need only fill in the remaining few parts on the right side. We notice that the number of squares needed to fill in this side is exactly the same number of points in each column of interior points (check this out on a few different rectangles if you don’t believe it).

Whew. Okay. So. We have $I$ interior points, each corresponding to a unit square. All but one of the top boundary points corresponds to a square; this is $T - 1$ unit squares. Last, we have the number of points in each column of the interior points corresponding to the remaining unit squares; this is $\frac{I}{T-2}$ unit squares.

At this point we want to write this nicely, so we’ll denote the TOTAL number of boundary points $B$, and note that $\frac{B}{2} - 1 = (T - 1) + \frac{I}{T-2}$.

**The total area we found was** $I + (T - 1) + \frac{I}{T-2}$. If we rearrange this a bit, we notice that

$$\text{Area} = I + \frac{B}{2} - 1.$$

This is exactly the statement of Pick’s theorem, and it is true more generally, as we’ll see.

### Triangles.

We’ll briefly cover one more example. Triangles are a natural figure to think about, so let’s see if Pick’s formula works here. First, right triangles are easy to work with, so let’s look at this one (ignore the blue part for now):

Notice that this triangle is exactly half of a rectangle (this is completed by the blue part), and it *just so happens to have no lattice points on the diagonal*. This last part is important so look again: none of the dots touch the diagonal of the red triangle (here, we are excluding the vertices of the triangle, which we don’t count as being on the diagonal). Some come close, but none lie on it. Of course, *this is not true in general*, but for now let’s just look at this triangle.

If we use the formula above for the rectangle, we get that the area is $I_{R} + \frac{B_{R}}{2} - 1$ for the rectangle (writing $I_{R}$ and $B_{R}$ for the rectangle’s interior and boundary point counts), and half of this will be $\frac{1}{2}\left(I_{R} + \frac{B_{R}}{2} - 1\right)$.

On the other hand, if we look at the interior points of the triangle, if none of them lie on the diagonal (like above) then we have *exactly half of what the rectangle had*, so our triangle has $\frac{I_{R}}{2}$ interior points; the number of boundary points will be *half of the number of boundary points of the rectangle, plus 1*. This can be seen as follows: if we consider all the points on the bottom boundary and all the points on the left boundary except for the top-most point, then this is exactly half of the boundary points of the rectangle. Hence, the number of boundary points we have for our triangle is $\frac{B_{R}}{2} + 1$.

Plugging this information into Pick’s formula (which, at this point, we only know is valid for the rectangle!) we obtain: $\frac{I_{R}}{2} + \frac{1}{2}\left(\frac{B_{R}}{2} + 1\right) - 1 = \frac{1}{2}\left(I_{R} + \frac{B_{R}}{2} - 1\right)$. This is exactly the area we calculated before, giving us a verification that Pick’s formula works for right triangles with no lattice points on the diagonal.

How do we get around the condition that no lattice points should be on the diagonal? There is a relatively easy way to break up right triangles into other right triangles, none of which will have points on their diagonals. I’ll demonstrate with this picture below:

The idea is to just take the big triangle, draw some vertical and horizontal lines from the lattice points which lie on the diagonal, until you get smaller triangles (which will have no lattice points on the diagonal) and a bunch of rectangles. In this case, I first got two triangles (a small one on top, and a small one on the bottom right) and one little 4×3 rectangle in the lower-left. You then split the rectangle in half, which gives you some more triangles; if these triangles had lattice points on the diagonal, I would have repeated the process, getting even smaller triangles and rectangles. Because everything is nice and finite here, we don’t get infinite numbers of rectangles and triangles and such: this process will eventually stop. We apply the above to each and note that **this verifies Pick’s formula for any right triangle.**

But even right triangles are a bit restrictive. It would be nice if it were true for *any *triangle. Indeed, there is a similar process which decomposes triangles into right triangles. In fact, there are a number of such processes: for example, prove to yourself that every triangle has at least one altitude which is entirely contained in the triangle, and note that this splits the triangle into two right triangles. However you show it, **this verifies Pick’s formula for any triangle**.

### The Theorem.

**Theorem (Pick).** Given a lattice polygon with $I$ interior points and $B$ boundary points, the total area enclosed by the figure is given by $A = I + \frac{B}{2} - 1$.
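We can machine-check the theorem on examples. This sketch is mine, not from the post: it computes the area with the shoelace formula, counts boundary lattice points edge-by-edge with a gcd, and counts interior points by brute force with an exact ray-casting test:

```python
from fractions import Fraction
from math import gcd

def edges_of(verts):
    # Pair each vertex with the next, wrapping around at the end.
    return list(zip(verts, verts[1:] + verts[:1]))

def on_boundary(p, verts):
    px, py = p
    for (x1, y1), (x2, y2) in edges_of(verts):
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross == 0 and min(x1, x2) <= px <= max(x1, x2) and min(y1, y2) <= py <= max(y1, y2):
            return True
    return False

def strictly_inside(p, verts):
    # Even-odd ray casting with exact arithmetic; boundary points excluded.
    if on_boundary(p, verts):
        return False
    px, py = p
    inside = False
    for (x1, y1), (x2, y2) in edges_of(verts):
        if (y1 > py) != (y2 > py):
            xcross = x1 + Fraction((py - y1) * (x2 - x1), y2 - y1)
            if xcross > px:
                inside = not inside
    return inside

def area_I_B(verts):
    # Shoelace area, interior lattice-point count, boundary lattice-point count.
    A = Fraction(abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in edges_of(verts))), 2)
    B = sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in edges_of(verts))
    xs = [x for x, _ in verts]
    ys = [y for _, y in verts]
    I = sum(1 for x in range(min(xs), max(xs) + 1)
              for y in range(min(ys), max(ys) + 1)
              if strictly_inside((x, y), verts))
    return A, I, B
```

For the 7×3 rectangle above, this returns area 21 with $I = 12$ and $B = 20$, and indeed $21 = 12 + \frac{20}{2} - 1$.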

The proof of this is done by induction. This is a more unusual type of induction since it requires us to induct on the number of triangles a figure is made up of. The difficult part has already been completed: we have shown that the formula holds for any triangle. We need a fact which I will not prove here:

**Fact: **Every polygon can be decomposed into a collection of triangles which only intersect at their boundaries. Moreover, lattice polygons can be decomposed into a collection of lattice triangles (triangles whose vertices are lattice points) which only intersect at their boundaries.

This process is called polygon triangulation. In layman’s terms, it means that you can take a polygon and cut it up into a bunch of triangles. Try it yourself for a few polygons!

Given all of this, let’s jump into the proof.

**Proof.** By induction. We have already proved the formula holds for triangles, so suppose the formula holds for all polygons which are able to be decomposed into $n$ or fewer triangles. Take such a polygon $P$ which is able to be decomposed into exactly $n$ triangles and "attach" a triangle $T$ to the boundary of $P$ such that the resulting figure is still a lattice polygon; call this new polygon $Q$.

For the triangle $T$, denote its number of boundary points by $B_{T}$ and its number of interior points by $I_{T}$; similarly, for $P$ denote its number of boundary points by $B_{P}$ and its number of interior points by $I_{P}$. Denote the number of common points that $P$ and $T$ share by $c$.

For interior points, note that we have added the interior points of $P$ and $T$ together, but we also obtain those points which they share on their boundary, except for the two shared points at the vertices of the triangle; that is, $I_{Q} = I_{P} + I_{T} + (c - 2)$.

For boundary points, we have to subtract $c - 2$ points from $P$’s boundary count and $c - 2$ points from $T$’s boundary count (for the same reason as in the previous paragraph). If we add together the boundary points of $P$ minus the common points and the boundary points of $T$ minus the common points, we will be counting the two shared vertex points of the triangle *two times* (why?), so we need to subtract 2 so that we only count these points once. Hence, $B_{Q} = (B_{P} - (c - 2)) + (B_{T} - (c - 2)) - 2 = B_{P} + B_{T} - 2c + 2$.

At this point, let $A_{P}$ be the area of $P$ and $A_{T}$ be the area of $T$; we have that:

$$A_{Q} = A_{P} + A_{T} = \left(I_{P} + \frac{B_{P}}{2} - 1\right) + \left(I_{T} + \frac{B_{T}}{2} - 1\right).$$

We note now that $I_{P} + I_{T} = I_{Q} - (c - 2)$ and $B_{P} + B_{T} = B_{Q} + 2c - 2$ from above, which gives us

$$A_{Q} = I_{Q} - (c - 2) + \frac{B_{Q} + 2c - 2}{2} - 2 = I_{Q} + \frac{B_{Q}}{2} - 1.$$

This verifies Pick’s formula for our lattice polygon $Q$, and since any lattice polygon can be constructed this way (from finitely many triangles) this shows that Pick’s formula holds for *any* lattice polygon. $\Box$

## Coefficients of Polynomials Corresponding to Sums of Powers of Natural Numbers Sum to 1.

### September 6, 2012

This post has a pretty weird title, but the problem is easy to state and uses a few interesting mathematical concepts. It’s worth going through. Let’s start with the basics.

**Problem 1.** Let $S_{k}(n) = 1^{k} + 2^{k} + \cdots + n^{k}$. Show that $S_{k}(n)$ is a polynomial in $n$ for each $k$ and that the degree of the polynomial is $k + 1$.

Indeed, for example, we have that $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$, as we learned in Calculus, and this is a polynomial of degree 2. Similarly, $1^{2} + 2^{2} + \cdots + n^{2} = \frac{n(n+1)(2n+1)}{6}$, which is a polynomial of degree 3. In the same respect, $1^{3} + 2^{3} + \cdots + n^{3} = \left(\frac{n(n+1)}{2}\right)^{2}$, which is a polynomial of degree 4.

The associated polynomials in this case are given by Faulhaber’s formula:

**Theorem (Faulhaber).** For $k \geq 1$ we have $\displaystyle \sum_{i=1}^{n} i^{k} = \frac{1}{k+1} \sum_{j=0}^{k} (-1)^{j} \binom{k+1}{j} B_{j}\, n^{k+1-j}$.

This formula looks terrifying, but it is not hard to apply in practice. You may be wondering, though, what the $B_{j}$’s in this formula stand for. These are the strange and wonderful Bernoulli numbers, of course! I always enjoy seeing these creatures, because they unexpectedly pop up in the strangest problems. There are a number of ways to define these numbers, one of which is to just write them out sequentially, starting with $B_{0}$:

$$B_{0} = 1,\quad B_{1} = -\tfrac{1}{2},\quad B_{2} = \tfrac{1}{6},\quad B_{3} = 0,\quad B_{4} = -\tfrac{1}{30},\quad B_{5} = 0,\quad B_{6} = \tfrac{1}{42},\quad \dots$$

But in this case it is not so easy to guess the next value. The clever reader will notice that all of the odd-numbered Bernoulli numbers (except the first) are zero, but other than that there does not seem to be a clear pattern. Fortunately, we can construct a *function* which *generates* the values as coefficients; we’ll call this function (surprise!) a *generating function*.

**Definition.** We define the sequence $\{B_{n}\}$ by

$$\frac{t}{e^{t} - 1} = \sum_{n=0}^{\infty} B_{n} \frac{t^{n}}{n!}.$$

Notice that this will, in fact, generate the $B_{n}$ as coefficients times $\frac{t^{n}}{n!}$. Neat. In practice, you can use a program like Mathematica to compute $B_{n}$ for pretty large values of $n$; but, of course, there are lists available. We can now use Faulhaber’s formula above, which gives us (assuming we have proven that the formula holds!) that the sums of $k$th powers of natural numbers form polynomials of degree $k + 1$.
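For the curious, the generating function is equivalent to the simple recurrence $\sum_{j=0}^{m} \binom{m+1}{j} B_{j} = 0$ for $m \geq 1$, which makes the numbers easy to compute exactly. A sketch (my code; the $(-1)^{j}$ in the Faulhaber step compensates for the $B_{1} = -\frac{1}{2}$ convention of the generating function above):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # B_0 = 1; for m >= 1, sum_{j=0}^{m} C(m+1, j) B_j = 0 determines B_m.
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def faulhaber(k, n):
    # Faulhaber's formula for 1^k + 2^k + ... + n^k, as an exact Fraction.
    B = bernoulli(k)
    return sum((-1) ** j * comb(k + 1, j) * B[j] * n ** (k + 1 - j)
               for j in range(k + 1)) / (k + 1)
```

Note that `faulhaber(k, 1)` is exactly the sum of the polynomial’s coefficients, which is the point of the problem below.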

But something else happens that’s pretty interesting. Let’s look at some of these polynomials (writing $S_{k}(n) = 1^{k} + \cdots + n^{k}$):

$$S_{1}(n) = \tfrac{1}{2}n^{2} + \tfrac{1}{2}n, \qquad S_{2}(n) = \tfrac{1}{3}n^{3} + \tfrac{1}{2}n^{2} + \tfrac{1}{6}n, \qquad S_{3}(n) = \tfrac{1}{4}n^{4} + \tfrac{1}{2}n^{3} + \tfrac{1}{4}n^{2}.$$

Look at the coefficients in each of these polynomials. Anything strange about them? Consider them for a bit.

**Problem.** Look at the coefficients. What do you find interesting about them? Note that, in particular, for a fixed $k$, the coefficients of the associated polynomial sum to 1. Convince yourself that this is probably true (do some examples!) and then prove that it is true. Do this before reading the statements below.

**Anecdote. **I spent quite a while trying to write down the "general form" of a polynomial with elementary symmetric polynomials and roots to try to see if I could prove this fact using some complex analysis and a lot of terms canceling out. This morning, I went into the office of the professor to ask him about what it *means* that these coefficients sum up to 1. He then gave me a one-line (maybe a half-line) proof of why this is the case.

*Hint. What value would we plug in to a polynomial to find the sum of the coefficients? What does plugging in this value mean in terms of the sum?*

Seemingly unrelated is Gauss’ Mean Value Theorem, which is significantly cooler (in my opinion) than the standard mean value theorem of the reals. We will define it formally below, but it says the following: if $f$ is analytic (equivalent to complex differentiable) on some disk $D$ and $z_{0}$ is the center point of this disk, then the average of the values of $f$ about the boundary of $D$ is equal to $f(z_{0})$. That is, to find the value of $f(z_{0})$, it suffices to integrate $f$ around a circle centered at $z_{0}$ and divide by $2\pi$ (the amount of radians we pass through while integrating). This is really neat to think about, since it tells us not only that there *exists* some point whose value is equal to the average of the values of $f$ lying on a circle, but, moreover, that this point is actually *the center of the circle*. This is intense stuff.

**Theorem (Gauss’ Mean Value Theorem).** Let $f$ be analytic on some closed disk $D$ which has center $z_{0}$ and radius $r$. Let $C$ denote the boundary of the disk (that is, $C$ is the circle bounding $D$). Then we have that $\displaystyle f(z_{0}) = \frac{1}{2\pi} \int_{0}^{2\pi} f(z_{0} + re^{i\theta})\, d\theta$.

The proof of this theorem is pretty straightforward and uses the Cauchy integral formula and some easy substitution.

*Proof.* Note that we have $\displaystyle f(z_{0}) = \frac{1}{2\pi i} \int_{C} \frac{f(z)}{z - z_{0}}\, dz$ by the Cauchy integral formula. The equation of a circle with radius $r$ and center $z_{0}$ is given by $z = z_{0} + re^{i\theta}$, where $\theta$ runs from 0 to $2\pi$ (if you don’t believe me, plot some points!). Substituting this value into the integral and noting that $dz = ire^{i\theta}\, d\theta$, we have that

$$f(z_{0}) = \frac{1}{2\pi i} \int_{0}^{2\pi} \frac{f(z_{0} + re^{i\theta})}{re^{i\theta}}\, ire^{i\theta}\, d\theta = \frac{1}{2\pi} \int_{0}^{2\pi} f(z_{0} + re^{i\theta})\, d\theta$$

as required. $\Box$
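Numerically, the theorem is easy to spot-check (my sketch; any analytic function and any circle inside its domain will do), approximating the integral by averaging over equally spaced points of the circle:

```python
import cmath

def circle_average(f, z0, r, n=4096):
    # Riemann-sum approximation of (1/2pi) * integral of f(z0 + r e^{i theta}) dtheta.
    return sum(f(z0 + r * cmath.exp(2j * cmath.pi * k / n)) for k in range(n)) / n

f = lambda z: cmath.exp(z) + 3 * z ** 2  # an entire function, chosen arbitrarily
z0 = 0.5 + 0.25j
avg = circle_average(f, z0, 1.0)
```

The average agrees with $f(z_{0})$ to near machine precision: for periodic analytic integrands, the equally spaced sum converges extremely fast.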

Why bring up this neat little theorem? Well, by itself it doesn’t seem to be all that useful — when would we be able to calculate and sum up a whole ton of values of an analytic function surrounding a point, but not be able to find the point itself? But this little theorem packs some punch as a way of bounding certain values. In particular, it gives a neat proof of the Maximum Modulus Theorem. You might have guessed this from the title of this post.

First, let’s note something quickly.

**Lemma.** Given the assumptions in Gauss’ MVT, we have $\displaystyle |f(z_{0})| \leq \frac{1}{2\pi} \int_{0}^{2\pi} |f(z_{0} + re^{i\theta})|\, d\theta$.

Be careful here in thinking that this should be an *equality*; we are now looking at the *modulus *of our value, and the *modulus* of each point on the circle. But this lemma comes almost for free:

*Proof.* We have $\displaystyle |f(z_{0})| = \left| \frac{1}{2\pi} \int_{0}^{2\pi} f(z_{0} + re^{i\theta})\, d\theta \right|$ by using Gauss’ MVT and simply taking the modulus of both sides. Note that

$$\left| \frac{1}{2\pi} \int_{0}^{2\pi} f(z_{0} + re^{i\theta})\, d\theta \right| \leq \frac{1}{2\pi} \int_{0}^{2\pi} \left| f(z_{0} + re^{i\theta}) \right|\, d\theta,$$

whence the inequality above. $\Box$

This lemma tells us that the value of the center of any circle is bounded by the sum of the modulus of the values of the points of that circle. We’ll see why this is the crucial bound we’ll need in the MMT’s proof below.

**Theorem (Maximum Modulus Theorem).** Given $f$ analytic on some domain $D$, if $f$ is non-constant on $D$ then the maximum value of $|f(z)|$ for $z \in D$ will occur on the boundary of $D$. (Alternatively, if $|f|$ is maximized by some value not on the boundary of $D$, then $f$ is constant on $D$.)

*Proof.* We’ll split this into two steps. The first step is for the specific case that $D$ is a closed disk and our maximum modulus occurs at the center of this disk. The second step will be to take some arbitrary domain $D$, construct some closed disks in the interior of $D$, and "piece these together" to show that $f$ is constant on all of $D$.

**Step 1:** Let’s suppose that our maximum modulus is at the center point of the disk $D$, which we will call $z_{0}$; that is, we are supposing that $|f(z_{0})| \geq |f(z)|$ for every $z \in D$. Since $z_{0}$ is an interior point, we have that there is some $\rho$-ball about $z_{0}$ (that is, a ball of radius $\rho$) which is completely contained in $D$. Let $C_{\rho}$ denote the circle of radius $\rho$ centered at the point $z_{0}$. By our second lemma above we have that

$$|f(z_{0})| \leq \frac{1}{2\pi} \int_{0}^{2\pi} |f(z_{0} + \rho e^{i\theta})|\, d\theta.$$

BUT, using that $|f(z_{0})| \geq |f(z)|$ for every $z \in D$, we have that

$$\frac{1}{2\pi} \int_{0}^{2\pi} |f(z_{0} + \rho e^{i\theta})|\, d\theta \leq \frac{1}{2\pi} \int_{0}^{2\pi} |f(z_{0})|\, d\theta = |f(z_{0})|.$$

Stringing these inequalities together and suggestively re-writing $|f(z_{0})| = \frac{1}{2\pi} \int_{0}^{2\pi} |f(z_{0})|\, d\theta$, we have that

$$\frac{1}{2\pi} \int_{0}^{2\pi} |f(z_{0})|\, d\theta \leq \frac{1}{2\pi} \int_{0}^{2\pi} |f(z_{0} + \rho e^{i\theta})|\, d\theta \leq \frac{1}{2\pi} \int_{0}^{2\pi} |f(z_{0})|\, d\theta,$$

and by subtracting,

$$\frac{1}{2\pi} \int_{0}^{2\pi} \left( |f(z_{0})| - |f(z_{0} + \rho e^{i\theta})| \right) d\theta = 0,$$

but since the integrand is always positive or zero (why?) it must be the case that

$$|f(z_{0})| - |f(z_{0} + \rho e^{i\theta})| = 0 \quad\text{for every } \theta,$$

or, in other words, $|f(z)| = |f(z_{0})|$ on the whole circle $C_{\rho}$. Since $\rho$ was arbitrary, we conclude that $|f(z)| = |f(z_{0})|$ for every $z$ in the disk.

**Step 2: **Now suppose we have some arbitrary domain $D$ and $f$ is analytic on all of $D$. I will hand-wave a bit here, but you can fill in the details. Note that a domain (in this context) necessarily means *open and* *path-connected* (and, in fact, it usually denotes a simply connected *open* subset of $\mathbb{C}$). Suppose that our maximum modulus occurs at some point on the interior of $D$ which we will call $z_0$. Now, given *any other point* $w \in D$, we have some path from $z_0$ to $w$ which is completely contained in $D$. In fact, we can make this path a finite *polygonal* path; that is, a path made out of a finite number of straight lines piecewise-connected together; we will denote this $L_1 \cup L_2 \cup \cdots \cup L_n$, where $L_i$ is the line with endpoints $p_{i-1}$ and $p_i$ (so $p_0 = z_0$ and $p_n = w$). I will let you work the details out here, but it can be done.

Now, the polygonal line might be right next to a boundary, and we don’t want to accidentally hit it when we start making balls around points, so let $\epsilon$ denote whichever is smaller: the distance from the polygonal line to the boundary, or 1. So, if your polygonal line is right next to the boundary, we might need to make $\epsilon$ pretty small; but if not, we can just let it be whatever we want, so we might as well make it 1. Note that since $D$ is open, no point on the polygonal path can be on the boundary, so $\epsilon > 0$. Now, let’s break up our polygonal path into another polygonal path with endpoints $q_0, q_1, \dots, q_m$ (where $q_0 = z_0$ and $q_m = w$) where each segment has length less than $\epsilon$. It is clear we can do this just by partitioning each straight line in our original path so that their lengths are appropriately small; note, we still only have a finite number of endpoints $q_i$. That’s important.

(In the picture above, I’ve made the original endpoints blue and then partitioned our polygonal path with the new red endpoints to make each line segment shorter than $\epsilon$.)

Now everything is going to fall pretty quickly, so keep on your toes. First, make a disk of radius $\epsilon$ (as defined above) around each $q_i$ and call it $B_i$. Now note that, by our previous step, since our maximum modulus occurs at $q_0 = z_0$, we have $|f(z)| = |f(z_0)|$ for every point $z \in B_0$. But $q_1$ is in $B_0$!

(This picture is not drawn to scale because I am not a good artist; this is illustrating $q_1$ being inside the disk $B_0$.)

So now $q_1$ is also of maximum modulus (since $|f(q_1)| = |f(z_0)|$) and so $|f(z)| = |f(z_0)|$ for every point $z$ in $B_1$. Continue this and we will obtain $|f(w)| = |f(z_0)|$. Since $w$ was an arbitrary point, it follows that $|f(z)| = |f(z_0)|$ for *every* $z \in D$; and an analytic function with constant modulus on a domain is itself constant. Hence, if $f$ attains a maximum modulus on the interior of some set $D$, then it is constant. This implies directly that any non-constant analytic function achieves its maximum modulus on the boundary. $\Box$

## Existence of Spanning Trees in Finite Connected Graphs.

### March 5, 2012

[Note: It’s been too long, mathblog! I’ve been busy at school, but I have no plans to discontinue writing in this thing. For longer posts, you’ll have to wait until I have some free time!]

You’ve probably seen a graph before: they’re collections of vertices and edges pieced together in some nice way. Here are some nice examples from Wikipedia:

One theorem that I constantly run into (mainly because it’s relevant to homology) is the following:

**Theorem.** Given some connected finite graph $G$, there exists a spanning tree of $G$.

Note that such a tree is not generally unique. You might want to find one or two in the graphs above to prove this to yourself! Nonetheless, even though this proof seems like it would be a bit intimidating (or, at least, by some sort of tedious construction), it’s actually quite nice. Let’s go through it.

*Proof. * We’ll prove this by induction on the number of cycles in $G$. If $G$ has no cycles then it is, itself, a tree; hence, $G$ is its own spanning tree. Suppose now that any connected graph with at most $n$ cycles has a spanning tree. Given some connected graph $G$ with $n+1$ cycles, take any one of these cycles and delete any edge of it (but not deleting the vertices!). This keeps the graph connected and reduces the number of cycles by (at least) one, so there is a spanning tree by induction. In fact, since we have not deleted any vertices, this tree still spans all of the vertices of $G$, and so it is a spanning tree of $G$ as well. $\Box$
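The proof’s idea — throw away edges that lie on cycles until none remain — can be run directly: scan the edges and keep one exactly when it joins two different components, since a rejected edge would close a cycle. A minimal sketch (the vertex labeling 0..n−1 and the union-find helper are my assumptions, not from the post):

```python
def spanning_tree(n, edges):
    """Spanning tree of a connected graph on vertices 0..n-1.

    Keep an edge exactly when it joins two different components; an edge
    rejected here would close a cycle, mirroring the deletions in the proof."""
    parent = list(range(n))  # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    tree = []
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru != rw:           # different components: this edge can't close a cycle
            parent[ru] = rw
            tree.append((u, w))
    assert len(tree) == n - 1, "the graph was not connected"
    return tree

# a triangle with a pendant vertex: the edge (2, 0) would close a cycle, so it's dropped
print(spanning_tree(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))
# [(0, 1), (1, 2), (2, 3)]
```

Note the final count `n - 1`: a spanning tree of a connected graph on $n$ vertices always has exactly $n-1$ edges, which is a handy sanity check.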

A couple of questions about this for the eager reader. Where did we use connectedness? Where did we use finite-ness? What if we were given an infinite graph? Is there a nice notion of cycles for this kind of graph? Draw some pictures and think about it!

## Uncountable Subset A of [0,1] with A – A Having Empty Interior.

### December 20, 2011

I’m going through a few books so that I can start doing lots and lots of problems to prepare for my quals. I’ll be posting some of the “cuter” problems.

Here’s one that, on the surface, looks strange. But ultimately, the solution is straightforward.

**Problem.** Find an uncountable subset $A \subseteq [0,1]$ such that $A - A = \{a - b : a, b \in A\}$ has empty interior.

## 11, 111, 1111, … Not a Square.

### December 18, 2011

I just saw this problem in a book of algebra problems, and I thought it was nice given one of the previous posts I had up here.

**Problem. **Show that 11, 111, 1111, 11111, … are not squares.

You ought to think about this for a bit. I started by supposing they were squares and attempting to work it out like that; unfortunately, there are some strange things that happen when we get bigger numbers. But. You should see a nice pattern with the last two digits. Click below for the solution, but only after you’ve tried it!
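If you’d like a computational nudge toward that last-two-digits pattern (this only automates the observation the post suggests, not the full writeup), note that $n^2 \bmod 100$ depends only on $n \bmod 100$, so checking $n = 0, \dots, 99$ covers every square:

```python
# which two-digit endings can a perfect square have?
# n^2 mod 100 depends only on n mod 100, so n = 0..99 covers all squares
square_endings = sorted({n * n % 100 for n in range(100)})
print(square_endings)
print(11 in square_endings)  # False: no perfect square ends in ...11
```

Since every number 11, 111, 1111, … ends in the digits 11, this observation is the heart of the proof.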

## Sequence which has countably many convergent subsequences.

### August 22, 2011

This post is about a question I just thought about that I thought had a pretty neat solution. The values we consider below are all in the Reals.

So, we’re all familiar with sequences which have subsequences which converge to two different values; for example, take $a_n = (-1)^n$, which has a subsequence converging to 1 and a subsequence converging to $-1$. Similarly, we can construct a sequence containing subsequences converging to three different values in a similar way; for example: $-1, 0, 1, -1, 0, 1, \dots$. Indeed, for any finite number, we can make a sequence containing subsequences converging to that number of different values.

**The obvious next question is: can a sequence have subsequences which converge to a countable number of different values? What about an uncountable number of different values?**

For the former question, I thought of this solution. First, enumerate your countable list of distinct values (think of the integers or the rationals, for example) as $x_1, x_2, x_3, \dots$. To construct the sequence $(a_n)$, we do the following:

- Let $a_1 = 0$.
- Let $a_n = 0$ if $n$ is not some positive power of a prime (for example, if $n$ is 15, 21, 35, 100, etc.).
- For $a_n$ where $n$ is a positive power of a prime, we do the following: if $n = p_j^k$ for $k \geq 1$ and $p_j$ is the $j$-th prime (so $p_1 = 2$, $p_2 = 3$, $p_3 = 5$, and so on) then set $a_n = x_j$, where $x_j$ is the $j$-th element from your list above.

How does this actually work? Let’s take a simple example. Let’s let our countable set be the set of natural numbers. This is easily ordered as $1, 2, 3, \dots$ and so, for this example, $x_1 = 1$, $x_2 = 2$, $x_3 = 3$, and so forth. Our sequence is now:

$$0, 1, 2, 1, 3, 0, 4, 1, 2, 0, \dots$$

and if we kept going like this,

$$0, 1, 2, 1, 3, 0, 4, 1, 2, 0, 5, 0, 6, 0, 0, 1, 7, 0, 8, 0, \dots$$

You kinda see the pattern going on here. The 1 will appear infinitely many times, but the spaces between its appearances grow exponentially. Same for 2, 3, 4, and so on. Thus, this sequence has the subsequence $a_{p_j}, a_{p_j^2}, a_{p_j^3}, \dots$ for any $j$, which is constantly $x_j$ and so obviously converges to $x_j$.
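The prime-power construction is easy to run by hand or by machine. Here’s a sketch; the helper names (`smallest_prime_factor`, `prime_power_index`, `sequence_term`) are mine, not from the post:

```python
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime (or 1, which never reaches here in our use)

def prime_power_index(n):
    """If n = p_j**k (k >= 1, p_j the j-th prime), return j; else None."""
    if n < 2:
        return None
    p = smallest_prime_factor(n)
    m = n
    while m % p == 0:
        m //= p
    if m != 1:
        return None  # a second prime divides n, so n is not a prime power
    # the index j of p: count the primes up to p
    return sum(1 for q in range(2, p + 1) if smallest_prime_factor(q) == q)

def sequence_term(n, values):
    """a_n = x_j if n = p_j**k, else 0 (the construction described above)."""
    j = prime_power_index(n)
    return values[j - 1] if j is not None else 0

values = list(range(1, 100))  # enumerate the naturals: x_j = j
print([sequence_term(n, values) for n in range(1, 21)])
# [0, 1, 2, 1, 3, 0, 4, 1, 2, 0, 5, 0, 6, 0, 0, 1, 7, 0, 8, 0]
```

Reading off the positions $2, 4, 8, 16, \dots$ you see the constant subsequence of 1’s; positions $3, 9, \dots$ give the 2’s, and so on.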

Do you have another neat way to do this? Encoding this in the primes was my first thought, but you never know!

As for uncountable, my guess is no. Of course, thinking of the sequence as a SET, we would get the reals from the rationals which is uncountable, but as a SEQUENCE it is not so trivial I feel. I don’t want to spend too much time thinking about this (as analysis is calling me!) but I’m sure the proof isn’t too crazy.

**Edit: Of course, a nice example exists where a sequence has subsequences converging to an uncountable number of distinct points. In fact, two nice examples exist, and both were provided to me by Brooke (as usual!).**

**Uncountable Example 1:**

It was a bit difficult for me to see the first one, but after doing a nice little thought experiment everything became much clearer. Here it is:

Let $q_1, q_2, q_3, \dots$ be an enumeration of the rationals. The claim is that there is actually a subsequence which converges to *any real number*. Think about it for a minute.

*Proof sketch.* Take your favorite real number. Let’s start easy and just say we want a subsequence which converges to $\pi$. Take the first element in our enumeration of the rationals above which is a distance less than 1 away from $\pi$; let’s say it’s $q_{n_1}$. Now let $q_{n_2}$ be the first element in our enumeration which comes *after* $q_{n_1}$ and which is a distance less than $\frac{1}{2}$ away from $\pi$.

After that, you just keep picking the "next" element from the list that’s less than $\frac{1}{2^k}$ away from $\pi$. There must be one due to the density of the rationals in the reals (or, assume not and see what happens!). We end up with $q_{n_1}, q_{n_2}, q_{n_3}, \dots$, a subsequence of our enumeration of the rationals which converges to $\pi$. Obviously, replacing $\pi$ with any other real number works exactly the same.
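This greedy picking can be carried out with an explicit enumeration. A sketch below uses the Calkin-Wilf sequence (which enumerates the *positive* rationals, enough for a positive target like $\pi$); the choice of enumeration and target are assumptions for illustration:

```python
from fractions import Fraction
import math

def calkin_wilf():
    """Yield every positive rational exactly once (the Calkin-Wilf sequence)."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

target = Fraction(math.pi)  # the float for pi, converted to an exact fraction
tol = Fraction(1)           # allowed distance: 1, then 1/2, 1/4, ...
subseq = []

for q in calkin_wilf():
    # "next" element of the enumeration that is close enough to the target
    if abs(q - target) < tol:
        subseq.append(q)
        tol /= 2
        if len(subseq) == 7:
            break

print([float(q) for q in subseq])  # approaches pi = 3.14159...
```

Density of the rationals guarantees the loop always finds its next element, so the `for` loop terminates for every tolerance.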

**Uncountable example 2:**

This was one that Brooke presented to me that I liked quite a bit since it’s really easy to state. Here’s the sequence (broken up by lines for clarity):

$$0.1, 0.2, 0.3, \dots, 0.9,$$
$$0.01, 0.02, 0.03, \dots, 0.99,$$
$$0.001, 0.002, 0.003, \dots, 0.999,$$
$$\dots$$

This has a subsequence converging to any real number in $[0,1]$. To see this is easy: decimal expand your chosen real number and pick the element "from each line" (as I’ve written it above) which is closest to it. They’ll be at most $10^{-k}$ away (for the suitable $k$ depending on the line you’re on) and thus a subsequence exists which converges to whatever number you picked.
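The "pick the closest entry on each line" step is just rounding to $k$ decimal places. A tiny sketch (reading line $k$ as the entries $j/10^k$ for $j = 1, \dots, 10^k - 1$, and with an assumed example target):

```python
import math

target = 1 / math.sqrt(2)  # an assumed example target in (0, 1)

subseq = []
for k in range(1, 8):
    j = round(target * 10**k)        # index of the closest entry on line k
    j = min(max(j, 1), 10**k - 1)    # stay inside the line
    subseq.append(j / 10**k)

print(subseq)  # 0.7, 0.71, 0.707, 0.7071, ...
```

Each pick is within $\tfrac{1}{2} \cdot 10^{-k}$ of the target, and later lines sit later in the sequence, so this really is a subsequence converging to the target.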

## Jordan’s Lemma.

### August 9, 2011

[This post is for those of you who are already comfy with doing some basic contour integrals in complex analysis.]

So you’re sitting around, evaluating contour integrals, and everything is fine. Then something weird comes up. You’re asked to evaluate an integral that looks like

$$\int_{-\infty}^{\infty} e^{iax} f(x)\, dx$$

for $a > 0$, where $f$ is continuous. Eek. Don’t panic though, because Camille Jordan’s gonna help you out.
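As a preview of the payoff, here is a numerical sanity check of the classic example $f(x) = \frac{1}{1+x^2}$ with $a = 1$, where closing the contour (justified by Jordan’s lemma) gives $\int_{-\infty}^{\infty} \frac{e^{ix}}{1+x^2}\, dx = \frac{\pi}{e}$. The truncation length and step count are my assumptions; this just checks the residue answer against brute-force quadrature:

```python
import math

# real part of e^{iax}/(1+x^2), whose integral over R is pi/e when a = 1
a = 1.0
L = 200.0        # truncation: the integrand decays like 1/x^2, so the tail is tiny
n = 400_000      # trapezoid subintervals
h = 2 * L / n

def g(x):
    return math.cos(a * x) / (1 + x * x)

total = 0.5 * (g(-L) + g(L)) + sum(g(-L + k * h) for k in range(1, n))
integral = h * total

print(integral, math.pi / math.e)  # both approximately 1.1557
```

Agreement to a few decimal places is exactly the reassurance you want before trusting the contour computation.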