## Greatest Common Divisors; Euclidean Algorithm.

### January 5, 2013

Recently, I’ve been learning to program in a new language and have been doing Project Euler problems for practice — of course, as the name suggests, most of these problems must be solved (efficiently) with mathematical techniques. Two of the most common algorithms I’ve used are prime testing and GCD finding. I’ll post about the former later, but the latter is an interesting problem in its own right:

**Initial Problem**. Given two natural (positive whole) numbers $a$ and $b$, can we find some other natural number that divides both of them?

This problem is a first step. It’s nice to be able to write numbers as a multiple of some other number; for example, if we have 16 and 18, we may write them as $2\cdot 8$ and $2\cdot 9$, thus giving us some insight as to the relationship between these two numbers. In that case it may be easy to see, but perhaps if you’re given the numbers 46629 and 47100, you may not realize right away that these numbers are $471\cdot 99$ and $471\cdot 100$ respectively. This kind of factorization will reveal "hidden" relationships between numbers.

So, given two numbers, how do we find if something divides both of them — in other words, how do we find the *common divisors* of two numbers? If we think back to when we first began working with numbers (in elementary school, perhaps) the first thing to do would be to note that 1 divides *every number*. But that doesn’t help us all that much, as it turns out, so we go to the next number: if both numbers are even, then they have 2 as a common factor. Then we "factor" both numbers by writing them as $2\cdot(\text{something})$ and then attempt to keep dividing things out of the *something*. We then move on to 3, skip 4 (since a number divisible by 4 is just divisible by 2 twice), go on to 5, then 7, then… and continue through the primes. This gives a *prime* factorization, but we have to note that if, say, 2 and 5 divide some number, then so does 10. These latter divisors are the *composite* factors.

This seems excessive, but it is sometimes the only way one can do it.

**Anecdote!** On my algebra qualifying exam, there was a question regarding a group of order 289 which required us to see if 289 was prime or not; if not, we were to factor it. We were not allowed calculators, so what could we do? Try everything. Note that we only need to try primes up to the square root of the number (which we could estimate in other ways), but it’s still a number of cases. If you check, none of the following numbers divide 289: 2, 3, 5, 7, 11, 13. At this point, I was about to give up and call it a prime, but, for whatever reason, I decided to try 17. Of course, as the clever reader will have pointed out, $17^{2} = 289$. It is not prime. There was, luckily, only one student who thought it was prime, but it points out how the algorithm above is not entirely trivial if one does not have access to a computer or calculator.
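The "try everything up to the square root" idea is easy to sketch in code. Here is a minimal Python version (my own illustration, not part of the original exam story; the function name is mine):

```python
def smallest_factor(n):
    """Return the smallest prime factor of n (or n itself, if n is prime).

    Only trial divisors up to sqrt(n) are needed: if n = a*b with a <= b,
    then a <= sqrt(n), so a factor must appear in that range.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# 289 is not prime: the search only stops at 17, since 17 * 17 = 289.
print(smallest_factor(289))  # 17
```

Note how the loop really does have to walk all the way to 17 before anything divides, which is exactly why this felt dicey without a calculator.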

Once we have a common divisor, or a set of common divisors, a natural thing to want to do is to find the *biggest* (we already have the smallest, 1) since in this way we can write our numbers with the largest common factor multiplied by some other number. It will, in effect, make things prettier.

**Real Problem.** Find the *greatest* divisor which is common to two natural numbers, $a$ and $b$.

If you were just learning about this kind of thing, you may spout out the following solution: find *all* of the common divisors, then pick the greatest. While this is not especially efficient, it *is* a solution. Unfortunately, even for small numbers, this gets out of hand quickly. For example, 60 and 420 have the following common divisors: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60. This takes a while to compute by hand.
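The brute-force approach is easy to write down; here is a quick Python sketch (my own illustration, not from the original post):

```python
def common_divisors(a, b):
    """All natural numbers dividing both a and b, found by brute force."""
    return [d for d in range(1, min(a, b) + 1) if a % d == 0 and b % d == 0]

divs = common_divisors(60, 420)
print(divs)       # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(max(divs))  # 60
```

The range it checks grows with the smaller input, so this really is as slow as it looks for large numbers.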

Even if we were to find prime factors, this would be $60 = 2^{2}\cdot 3\cdot 5$ and $420 = 2^{2}\cdot 3\cdot 5\cdot 7$, which gives us that they share a number of prime factors. A bit of thinking gives us that we take all of the prime factors they "share" (counted with multiplicity) and multiply them together to get the greatest common divisor; here, that’s $2^{2}\cdot 3\cdot 5 = 60$. This is another potential solution which is much faster than simply listing out all of the common divisors. Unfortunately, this falls prey to the same kind of trap that other prime-related problems do: it is, at times, especially difficult to factor large composite numbers. For example, the "reasonably small" number 49740376105597 is not at all efficient to factor if one does not have a computer or a specialized calculator with a factoring algorithm on it. As a mean joke, you may ask your friend to factor something like 1689259081189, which is actually the product $1299709\cdot 1299721$ of the 100,000th and 100,001st primes — that is, they would need to test 99,999 primes before getting to the one which divides the number. If they divided by one prime per second (which is quite fast!) this would take them 1 day, 3 hours, and 46 minutes. Not especially effective, but it *will* eventually get the job done.
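For comparison, here is a Python sketch (mine, not from the post) of the factor-then-multiply approach; the trial-division factoring inside it is exactly the slow part discussed above:

```python
from collections import Counter

def prime_factors(n):
    """Prime factorization of n as a Counter {prime: exponent}, by trial division."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_by_factoring(a, b):
    """Multiply the shared prime powers together, as described above."""
    fa, fb = prime_factors(a), prime_factors(b)
    g = 1
    for p in fa.keys() & fb.keys():
        g *= p ** min(fa[p], fb[p])
    return g

print(dict(prime_factors(60)))         # {2: 2, 3: 1, 5: 1}
print(gcd_by_factoring(60, 420))       # 60
print(gcd_by_factoring(46629, 47100))  # 471
```

This is correct, but its running time is dominated by factoring, which is the very thing we were complaining about.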

**Real Problem, With Efficiency:** Find the greatest divisor which is common to two natural numbers, $a$ and $b$, but do so in an efficient manner (we’ve all got deadlines!).

We need to sit down and think about this now. We need an entirely new idea. We note, at least, that one of the two numbers must be larger than the other (if they are equal, the problem is trivial). One thing to try would be to see if the smaller one goes into the larger one (for example, above we had 60 going into 420, which gave us the easy solution that 60 must be the greatest common divisor). If not, maybe we can see how much is left over. That is, if $a$ is the larger number,

$$a = qb + r,$$

where here $q$ is the number of times $b$ goes into $a$ without exceeding it, and $r$ is the "remainder"; if $r$ is equal to 0, then $b$ evenly divides into $a$, and otherwise $r$ is less than $b$ (or else we could divide an additional $b$ into $a$).

Using this, if $r \neq 0$, we may write $a - qb = r$; this means that, in particular, anything which divides both $b$ and $r$ also divides $a$, so a common divisor of $b$ and $r$ is automatically a common divisor of $a$ and $b$. But $r$ may not actually be a factor of $b$; so let’s see how many times it goes into $b$. Using the same process…

$$b = q_{1}r + r_{1},$$

and by rearranging, we have that $r_{1} = b - q_{1}r$ is divisible by any common divisor of $b$ and $r$. So, $r_{1}$ inherits all of those common divisors, but we aren’t sure if $r$ is divisible by $r_{1}$… if it were, we would be able to say that $r_{1}$ was a common divisor of $a$ and $b$ (why?). That’s *something* at least.

The cool thing about our algorithm here is that, at each stage, either the remainder is 0 and we’re done with the algorithm, or the remainder is nonzero and we may form a new equation $r = q_{2}r_{1} + r_{2}$; this equation has, on the left-hand side, the number $r$, which is less than the previous equation’s left-hand side, which was $b$. Continuing this process, we will have $r_{1}, r_{2}, r_{3}, \dots$ on the left-hand sides, each of which is less than the one which came before it. Because $r_{i} \geq 0$ for any of the remainders, *eventually* some remainder will become 0 (why?) and this algorithm will terminate. That is, we will have found *some* $r_{n}$ which is a common divisor for both $a$ and $b$; specifically, it will be the $r_{n}$ such that $r_{n+1} = 0$ (or, it may simply be $b$, if $b$ divides $a$).

This algorithm, called the *Euclidean algorithm*, actually does more "automatically": it not only finds a common divisor, but actually finds the *greatest common divisor* of $a$ and $b$, which, from now on, we will denote $\gcd(a,b)$. The "proof" of this is simply noting that $\gcd(a,b) = \gcd(b,r)$ (we noted this above without making reference to the gcd, but the reader should attempt to go through all the same steps using the idea of the gcd).

So. If you have two natural numbers, $a$ and $b$, you divide them, find the remainder, write the equation, then continue as above until you get a 0 remainder. Then you pick the remainder directly before you got 0 as your gcd (or, you pick the smaller number if one number divides the other). Pretty simple algorithm, but is it efficient?
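The procedure just described is only a few lines of code. Here is a minimal Python sketch of it (the function name is mine):

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b).

    When b reaches 0, a holds the last nonzero remainder, the gcd.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(60, 420))       # 60
print(gcd(46629, 47100))  # 471
```

In practice, Python ships this in the standard library as `math.gcd`, so you would normally just call that; the hand-rolled version is here to mirror the discussion.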

Without going into formal "efficiency" definitions: yes, it is quite efficient. To see this, let’s take an "average" example using the "large" numbers 1337944608 and 4216212. We note (by pen and paper, or by using a standard calculator) that

$$1337944608 = 317(4216212) + 1405404.$$

Next, we note that

$$4216212 = 3(1405404) + 0,$$

which instantly gives us the solution $\gcd(1337944608, 4216212) = 1405404$. That’s pretty awesome. Note that this was an especially quick trial, but even the "worst" ones are relatively quick.
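Those two division steps can be reproduced mechanically. This small Python function (my illustration, not from the original post) prints each equation as it goes:

```python
def gcd_trace(a, b):
    """Run the Euclidean algorithm, printing each step a = q(b) + r."""
    while b != 0:
        q, r = divmod(a, b)
        print(f"{a} = {q}({b}) + {r}")
        a, b = b, r
    return a

print(gcd_trace(1337944608, 4216212))
# 1337944608 = 317(4216212) + 1405404
# 4216212 = 3(1405404) + 0
# 1405404
```

Two steps, even for nine-to-ten-digit inputs; that is the efficiency claim made concrete.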

**Unexpected Corollary!** For natural numbers $a$ and $b$, if $\gcd(a,b) = d$ then there exist integers $x$ and $y$ such that $ax + by = d$.

This is more useful than you might think at first glance, and we’ll get into why in a later post, but what’s nice about this corollary is that it comes "for free" from the Euclidean algorithm. Note that, since $d$ divides both $a$ and $b$, it suffices to prove this corollary for $a$ and $b$ which have $\gcd(a,b) = 1$ (such numbers are called *relatively prime*). The proof uses induction on the number of steps of the Euclidean algorithm for those numbers, but for those of you who are more experienced and know modular arithmetic, you may enjoy the following simple proof:

*"Clever" Proof of the Corollary:* Let $a > b$ (for equality, the proof is easy). We will only care about remainders in this proof, so we will look at some numbers modulo $b$. Consider

$$a, 2a, 3a, \dots, (b-1)a.$$

Note there are exactly $b-1$ numbers here and that the remainder $0$ never occurs (since $a$ and $b$ are relatively prime, no multiple $ka$ with $0 < k < b$ can be divisible by $b$). Suppose that $ka \not\equiv 1 \pmod{b}$ for each of the $k = 1, 2, \dots, b-1$; that is, the remainder 1 does not ever show up in this list. By the pigeon-hole principle (as there are $b-1$ remainders but only $b-2$ possible values left for the remainders) we must have that the same remainder occurs twice for some $i \neq j$. That is, we have

$$ia \equiv ja \pmod{b},$$

which implies

$$(i - j)a \equiv 0 \pmod{b},$$

but this is impossible, since it implies that either $i - j$ or $a$ is some integer multiple of $b$; but $0 < |i - j| < b$, and we have assumed $a$ and $b$ are relatively prime. Hence, the remainder $1$ must occur. That is, for some $k$ we have

$$ka \equiv 1 \pmod{b}.$$

But what does this mean? It means that there is some integer $m$ such that $ka = 1 + mb$. To make this prettier, let $x = k$ and $y = -m$, and we find that there exist integers $x$ and $y$ such that $ax + by = 1$, as required. $\blacksquare$
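Running the Euclidean algorithm "backwards" actually computes the $x$ and $y$ of the corollary; this is usually called the extended Euclidean algorithm. A recursive Python sketch (mine, not the post’s):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # We have g = b*x + (a % b)*y and a % b = a - (a // b)*b; rearranging
    # gives g = a*y + b*(x - (a // b)*y).
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(60, 420)
print(g, x, y)           # 60 1 0
print(60 * x + 420 * y)  # 60
```

The recursion unwinds exactly the chain of remainder equations from the algorithm, substituting each one back into the previous.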

Pretty slick, no?

## Category Theory: Mono, Epi, but not Iso?

### January 2, 2013

This post will require some very basic knowledge of category theory (like, what a category is, and how to make a poset into a category). For everything below, I will be a bit informal, but I will essentially mean that $A$ and $B$ are objects in a category, and $f: A \to B$ is some morphism between them which is also in the category.

The "natural" extension of the notion of a surjective map (in, say, the category of sets) is

**Definition**. A map $f: A \to B$ is an *epimorphism* if, for each object $C$ and maps $g_{1}, g_{2}: B \to C$, we have that if $g_{1}\circ f = g_{2}\circ f$ then $g_{1} = g_{2}$.

You should prove for yourself that this is, in fact, what a surjective map "does" in the category of sets. Pretty neat. Similarly, for injective maps (in, say, the category of sets) we have the more general notion:

**Definition**. A map $f: A \to B$ is a *monomorphism* if, for each object $C$ and maps $g_{1}, g_{2}: C \to A$, we have that if $f\circ g_{1} = f\circ g_{2}$ then $g_{1} = g_{2}$.
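Both definitions are concrete enough to check by brute force in the category of finite sets. The following Python sketch (my own illustration, not from the post; encoding maps as dicts is an arbitrary choice) verifies that a particular surjection is right-cancellable and a particular injection is left-cancellable:

```python
from itertools import product

def all_maps(dom, cod):
    """All functions dom -> cod, encoded as dicts."""
    return [dict(zip(dom, images)) for images in product(cod, repeat=len(dom))]

def compose(g, f):
    """(g . f)(x) = g(f(x))."""
    return {x: g[f[x]] for x in f}

A, B, C = [0, 1], [0, 1, 2], [0, 1]

# f : B -> A is surjective, so it should be right-cancellable (an epi):
f = {0: 0, 1: 1, 2: 1}
is_epi = all(g1 == g2
             for g1 in all_maps(A, C) for g2 in all_maps(A, C)
             if compose(g1, f) == compose(g2, f))

# h : A -> B is injective, so it should be left-cancellable (a mono):
h = {0: 0, 1: 2}
is_mono = all(g1 == g2
              for g1 in all_maps(C, A) for g2 in all_maps(C, A)
              if compose(h, g1) == compose(h, g2))

print(is_epi, is_mono)  # True True
```

Of course, this only checks one surjection and one injection against one test object; the actual proofs are what the exercises below ask for.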

Again, you should prove for yourself that this is the property that injective mappings have in the category of sets. Double neat. There is also a relatively nice way to define an isomorphism categorically — which is somewhat obvious if you’ve seen some algebraic topology before.

**Definition**. A map $f: A \to B$ is an *isomorphism* if there is some mapping $g: B \to A$ such that $g\circ f = 1_{A}$ and $f\circ g = 1_{B}$, where $1_{A}$ and $1_{B}$ denote the identity morphisms from the subscripted object to itself.

Now, naively, one might think, "Okay, if I have some certain kind of morphism in my category (set-maps, homomorphisms, homeomorphisms, poset relations, …) then if it is an epimorphism and a monomorphism, it should automatically be an isomorphism." Unfortunately, **this is not the case.** Here are two simple examples.

**Example (Mono, Epi, but not Iso)**. The simplest category for which this works is the category **2**: it has two objects, $a$ and $b$, and a single non-identity morphism $f: a \to b$.

There are two objects, $a$ and $b$, and three morphisms: the two identities and the morphism $f: a \to b$. First, prove to yourself that this is actually a category. Second, we note that $f$ is an epimorphism: the only map out of $b$ is the identity, and there is no mapping from $b$ back to $a$, so the property trivially holds. Third, we note that $f$ is a monomorphism for the exact same reason as before. Last, we note that $f$ is *not* an isomorphism: we would need some $g: b \to a$ which satisfied the properties in the definition above… but, there *is no map* from $b$ to $a$. Upsetting! From this, we must conclude that $f$ cannot be an isomorphism despite being a mono- and epimorphism.

**Similar Example (Mono, Epi, but not Iso).** Take the category $(\mathbb{N}, \leq)$, the natural numbers with morphisms given by the relation $\leq$. Which morphisms are the monomorphisms? Which morphisms are the epimorphisms? Prove that the *only* isomorphisms are the identity morphisms. Conclude that there are a whole bunch of morphisms which are mono- and epimorphisms but which are not isomorphisms.

## The Burnside Theorem and Counting, part I.

### December 8, 2012

Let’s talk about Burnside’s theorem. First, let me note that there are *two* results commonly called "Burnside’s Theorem." The first one that I learned (which we won’t be discussing in this post) was:

**Theorem (Burnside)**. If $G$ is a finite group of order $p^{a}q^{b}$, where $a, b$ are non-negative integers and where $p, q$ are primes, then $G$ is solvable.

The second one is also a group theoretical result, but a bit more combinatorial-feeling. In some books (and, apparently, Wikipedia) this second result is called Burnside’s Lemma. As noted in the Wikipedia article, this theorem was not even due to Burnside, who quoted the result from Frobenius, who probably got it from Cauchy.

Let’s get some definitions down. As usual, we’ll denote the *order* of a group $G$ by $|G|$, and our groups will *all be finite in this post*. If we have a group $G$ which acts on a set $X$, then given some fixed $g\in G$ we define the set $\text{Fix}_{X}(g) = \{x\in X \mid g\cdot x = x\}$; this is, of course, the set of fixed points in $X$ when acted on by the element $g$; for the remainder of the post, we will simply write $\text{Fix}(g)$, with the set $X$ implied. Remember, when a group acts on $X$, the "product" $g\cdot x$ will sit inside of $X$, and we write the action as $g\cdot x$ for an element $g\in G$ acting on an element $x\in X$. The *orbit* of a point $x\in X$ when acted on by $G$ is given by $\{g\cdot x \mid g\in G\}$; we’ll denote this $\mathcal{O}_{x}$, though this is not standard notation. The orbit, essentially, is all of the possible values that you can get to by acting on $x$ with elements in your group.

One thing to note, also, is that orbits are *pairwise disjoint*. You should prove this to yourself if you haven’t already, but the idea is like this: if $\mathcal{O}_{x}$ and $\mathcal{O}_{y}$ are orbits of elements $x, y \in X$, then suppose $z \in \mathcal{O}_{x}\cap\mathcal{O}_{y}$; then there are some $g, h \in G$ such that $g\cdot x = z = h\cdot y$, but this implies $x = (g^{-1}h)\cdot y$, which implies the orbits are identical (why?). Hence, each element of $X$ is in *exactly one orbit*.

We need one more result before we can sink our teeth into Burnside. Remember the fixed point set above? That was the set of all elements $x\in X$ such that $g\cdot x = x$ for some fixed $g\in G$. There’s a similar notion called a *stabilizer*, denoted $G_{x} = \{g\in G \mid g\cdot x = x\}$; this is saying that we first fix $x\in X$, and then look at all of the elements of $G$ which stabilize it. These definitions are pretty similar feeling (almost like family!) and, in fact, there is a nice relation between the two:

**Notation.** Let $X/G$ denote the set of orbits of $X$ when acted on by $G$, and let $G/G_{x}$ denote the set of (left) cosets of the stabilizer $G_{x}$ in $G$; when we have a group and a normal subgroup, this latter notation agrees with the usual quotient.

**Theorem (Orbit-Stabilizer Theorem).** There is a bijection between $G/G_{x}$ and $\mathcal{O}_{x}$.

That is, if we count the cosets of the stabilizer of $x$ in $G$, we get exactly the number of elements in the orbit of $x$. This might seem a little confusing at first, but if you work through it, it’s not so weird.

*Sketch of the Proof.* (Skip this if you’re not comfortable with all of the notation above; just go down to the next theorem.) Here, we want to show a bijection. Notice that $G/G_{x}$ is the set of cosets of $G_{x}$ in $G$. We claim that the mapping $\varphi: G/G_{x} \to \mathcal{O}_{x}$ which sends $hG_{x} \mapsto h\cdot x$ is well-defined, injective, and surjective (but not a homomorphism; $\mathcal{O}_{x}$ need not even be a group). First, well-defined: if $hG_{x} = kG_{x}$ then $k^{-1}h \in G_{x}$, which means that $(k^{-1}h)\cdot x = x$. This implies, after some manipulation, that $h\cdot x = k\cdot x$, which means these elements are identical in $\mathcal{O}_{x}$. Second, surjectivity is clear: every element of the orbit has the form $h\cdot x$, which is the image of $hG_{x}$. Last, if $h\cdot x = k\cdot x$ in the orbit, then $(k^{-1}h)\cdot x = x$, which implies $k^{-1}h \in G_{x}$, which gives $hG_{x} = kG_{x}$; hence this map is injective. This gives us that our map is bijective. $\blacksquare$

One immediate corollary is that $|\mathcal{O}_{x}| = |G|/|G_{x}|$; that is, the number of elements in the orbit of $x$ is the same as the number of elements in $G$ divided by the number of elements in $G$ which fix $x$. Think about this for a minute.
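To see the orbit-stabilizer relation concretely, here is a small Python check (my own example, not from the post): the four rotations of a square act on two-element subsets of its vertices, and a diagonal has an orbit of size 2 and a stabilizer of size 2.

```python
# G = rotations of a square; rotation by g sends vertex v to (v + g) % 4.
G = range(4)

def act(g, s):
    """Apply rotation g to a set s of vertices."""
    return frozenset((v + g) % 4 for v in s)

def orbit(s):
    return {act(g, s) for g in G}

def stabilizer(s):
    return [g for g in G if act(g, s) == s]

diag = frozenset({0, 2})                 # a diagonal of the square
print(sorted(map(sorted, orbit(diag))))  # [[0, 2], [1, 3]] -- the two diagonals
print(stabilizer(diag))                  # [0, 2] -- rotation by 0 and by 180 degrees
# Orbit-Stabilizer: |orbit| * |stabilizer| == |G|
print(len(orbit(diag)) * len(stabilizer(diag)) == len(G))  # True
```

The product $|\mathcal{O}_{x}|\cdot|G_{x}| = |G|$ holds for every subset you feed in, not just the diagonal.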

## Road to the Proof.

Okay. Now, let’s think about something for a second. What is the sum

$$\sum_{g\in G}|\text{Fix}(g)|$$

telling us? This is the number of elements in $X$ which are fixed by some $g\in G$; but there might be some overlap, since if $g\cdot x = x$ and $h\cdot x = x$, then $x$ will be counted twice: once as an element of $\text{Fix}(g)$ and once as an element of $\text{Fix}(h)$. But how much overlap is there? This is an innocent-seeming question, and you might think something like, "Well, it depends on how much stuff stabilizes each $x$," and this is pretty close to the point.

First, note that

$$\sum_{g\in G}|\text{Fix}(g)| = \left|\{(g,x)\in G\times X \mid g\cdot x = x\}\right|,$$

which is just the long way to write out this sum; but, the nice part about that is, we can now count these pairs the other way: for each $x\in X$, count all of the elements of $G$ which stabilize it (why?). Then,

$$\sum_{g\in G}|\text{Fix}(g)| = \sum_{x\in X}|G_{x}|.$$

If you don’t see this, you should prove to yourself why they’re the same sum (why is each element counted in the left-hand side also counted in the right-hand side?). Now, by the Orbit-Stabilizer theorem above, this right-hand sum becomes pretty nice. Specifically,

$$\sum_{x\in X}|G_{x}| = \sum_{x\in X}\frac{|G|}{|\mathcal{O}_{x}|} = |G|\sum_{x\in X}\frac{1}{|\mathcal{O}_{x}|},$$

where we noted in the last equality that $|G|$ is a constant, so we may pull it out of the sum.

Recalling that $|X/G|$ denotes the number of orbits, we have that if we take a single orbit (call it $A$) we will be adding up $\frac{1}{|A|}$ exactly $|A|$ times (since the sum is taken over each $x\in X$, so, in particular, over each $x\in A$); hence, we will add $1$ for each orbit we have in $X$. That is,

$$|G|\sum_{x\in X}\frac{1}{|\mathcal{O}_{x}|} = |G|\cdot|X/G|.$$

Putting this all together, we have

$$\sum_{g\in G}|\text{Fix}(g)| = |G|\cdot|X/G|.$$

We clean it up a bit, and state the following:

**Theorem (Burnside’s).** For a finite group $G$ acting on a set $X$, with notation as above, we have $|X/G| = \frac{1}{|G|}\sum_{g\in G}|\text{Fix}(g)|$.

That is, the number of orbits is equal to the sum, over $g\in G$, of the number of elements of $X$ fixed by $g$, averaged by the number of elements in $G$. Kind of neat.
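As a sanity check, here is a Python sketch (my own; the necklace count is a standard application rather than one from this post) which counts 2-colorings of 4 beads up to rotation, both by Burnside’s formula and by listing the orbits directly:

```python
from itertools import product

n, colors = 4, 2
G = range(n)                                 # cyclic rotations of n beads
X = list(product(range(colors), repeat=n))   # all 2^4 = 16 colorings

def rotate(x, g):
    return x[g:] + x[:g]

# Burnside: the number of orbits is the average number of fixed colorings.
fixed_counts = [sum(1 for x in X if rotate(x, g) == x) for g in G]
burnside_count = sum(fixed_counts) // len(G)

# Direct count: collect each coloring's orbit and count the distinct orbits.
orbits = {frozenset(rotate(x, g) for g in G) for x in X}

print(fixed_counts)                  # [16, 2, 4, 2]
print(burnside_count, len(orbits))   # 6 6
```

Both counts agree: there are 6 essentially different necklaces, and the averaged fixed-point sum finds this without ever listing the orbits.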

Next time, we’ll talk about applications!

## A Very Basic Introduction to the Sylow Theorems.

### May 18, 2012

[Note: It’s been a while! I’ve now completed most of my pre-research stuff for my degree, so now I can relax a bit and write up some topics. This post will be relatively short just to “get back in the swing of things.”]

In Group Theory, the big question used to be, “Given such-and-such is a group, how can we tell which group it is?”

The Sylow Theorems (proved by Ludwig Sylow) provide a really nice way to do this for finite groups using prime decomposition. In most cases, the process is quite easy. We’ll state the theorems here in a slightly shortened form. Note that a subgroup which is of order $p^{k}$ for some prime $p$ and some $k \geq 1$ is, unsurprisingly, called a *$p$-subgroup*. A $p$-subgroup of maximal order in $G$ is called a *Sylow $p$-subgroup*.

**Theorem (Sylow)**. Let $G$ be a group with $|G| = p^{n}m$ for a prime $p$ with $p \nmid m$. Then,

- There exists at least one subgroup of order $p^{n}$.
- The Sylow $p$-subgroups are conjugate to one another; that is, if $P_{1}, P_{2}$ are Sylow $p$-subgroups, then there is some $g\in G$ such that $gP_{1}g^{-1} = P_{2}$. Moreover, for all $g\in G$, we have that $gP_{1}g^{-1}$ is a Sylow $p$-subgroup.
- The number of Sylow $p$-subgroups of $G$, denoted by $n_{p}$, is of the form $n_{p} = 1 + kp$; in other words, $n_{p} \equiv 1 \pmod{p}$. Moreover, $n_{p}$ divides $m$.

The first part says that the set of Sylow $p$-subgroups of $G$ is not empty if $p$ divides the order of $G$. Note that this is slightly abbreviated (the second part is actually more general, and the third part has a few extra pieces) but this will give us enough to work with.

**Problem:** Given a group $G$ with $|G| = pq$ for primes $p < q$, is $G$ ever simple (that is, does it ever have no nontrivial proper normal subgroups)? Can we say explicitly what $G$ is?

We use the third part of the Sylow theorems above. We note that $n_{q} \equiv 1 \pmod{q}$ and $n_{q} \mid p$, but $p < q$, so this immediately implies that $n_{q} = 1$ (why?). So we have one Sylow $q$-subgroup; let’s call it $Q$. Once we have this, we can use the second part of the Sylow theorem: since $gQg^{-1}$ is a Sylow $q$-subgroup for each $g\in G$, but we’ve shown that $Q$ is the only one there is, it must be that $gQg^{-1} = Q$ for every $g\in G$; this says $Q$ is normal in $G$. We have, then, that $G$ isn’t simple. Bummer.

On the other hand, we can actually say what this group is. So let’s try that. We know the Sylow $q$-subgroup, but we don’t know anything about the Sylow $p$-subgroups. We know that $n_{p} \equiv 1 \pmod{p}$ and $n_{p} \mid q$, but that’s about it. There are two possibilities: either $n_{p} = 1$ or $n_{p} = q$.

For the first case, by using the modular relation, if $q \not\equiv 1 \pmod{p}$ then this forces $n_{p} = 1$; this gives us a unique normal Sylow $p$-subgroup $P$. Note that since the orders of our normal subgroups multiply up to the order of the group, we have $G \cong P \times Q \cong \mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/q\mathbb{Z}$; in other words, $G \cong \mathbb{Z}/pq\mathbb{Z}$.

For the second case, $n_{p} = q$, which can only happen when $q \equiv 1 \pmod{p}$. We will have a total of $q$ subgroups of order $p$ and none of these are normal. This part is a bit more involved, but the punch line is that $G$ will be the (unique up to isomorphism) nonabelian semidirect product $\mathbb{Z}/q\mathbb{Z} \rtimes \mathbb{Z}/p\mathbb{Z}$.

I’ll admit that the last part is a bit hand-wavy, but this should at least show you the relative power of the Sylow theorems. They also come in handy when trying to show something either does or does not have a normal subgroup. Recall that a simple group has no nontrivial proper normal subgroups.

**Question.** Is there any simple group with $|G| = 165$?

I just picked this number randomly, but it works pretty well for this example. We note that $165 = 3\cdot 5\cdot 11$. Let’s consider, for kicks, $n_{11}$. We know $n_{11}$ must divide $3\cdot 5 = 15$ and it must be the case that $n_{11} \equiv 1 \pmod{11}$; putting these two facts together, we get $n_{11} = 1$. This immediately gives us a normal subgroup of order 11, which implies there are *no simple groups* of order 165.

**Question.** Is there any simple group with $|G| = 777$?

Alas, alack, you may say that 777 is too big of a number to handle, but you’d be *dead wrong*. Of course, $777 = 3\cdot 7\cdot 37$. Use the same argument as above (this time with $n_{37}$) to show there are no simple groups of this order.

**Question.** Is there any simple group with $|G| = 189$?

Note that $189 = 3^{3}\cdot 7$, so we need to do a little work, but not much. Just for fun, let’s look at $n_{7}$. We must have that it is 1 modulo 7 and it must divide $27$. Hm. A bit of thinking will give you that $n_{7} = 1$, which gives us the same conclusion as above.

Of course, there are examples where this doesn’t work so nicely. Think about a group of order 56, for example: here $56 = 2^{3}\cdot 7$, so $n_{7}$ divides 8 and is 1 modulo 7, which allows both $n_{7} = 1$ and $n_{7} = 8$. In order to work with these kinds of groups, one must do a bit more digging. We will look into more of this later.
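These little divisor-and-congruence checks are easy to mechanize. Here is a brute-force Python sketch (mine, not the post’s) which lists the values of $n_{p}$ allowed by the third part of the Sylow theorems:

```python
def sylow_counts(order, p):
    """Values of n_p allowed by Sylow: divisors d of m with d = 1 (mod p),
    where order = p^k * m and p does not divide m."""
    m = order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(sylow_counts(165, 11))  # [1]    -> unique (hence normal) Sylow 11-subgroup
print(sylow_counts(777, 37))  # [1]    -> no simple group of order 777
print(sylow_counts(56, 7))    # [1, 8] -> this argument alone is inconclusive
```

Whenever the list is just `[1]`, simplicity is immediately ruled out, exactly as in the examples above.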

## Tensor Products: A few explicit calculations.

### December 17, 2010

## Reader Beware.

I planned to do a post about tensor products (what they are, why we should care, what we do with them, etc.) but because I’m not comfortable with all of that quite yet, I’m going to assume you know what tensor products are, and do a few explicit calculations. So, in short, if you don’t already know what tensor products are, don’t read this post.

Our notation will be as follows: $k$ is a field, $R$ is a commutative ring with $1$, and $\otimes_{R}$ will denote the tensor product of modules over $R$. As usual, $R[x]$ will denote the polynomials in $x$ with coefficients in $R$.

**(Note:** My thanks to Brooke, who pointed out that I kept writing "$+$" when I meant "$\otimes$." I hope I’ve not made this error elsewhere, as tensors are "pretty different" from standard addition.)

## Algebras, or When Can I Multiply Stuff in My Module?

### October 28, 2010

## Wordy Introduction, Motivation.

When you first start high school algebra, the big thing is FOIL-ing, right? Expanding and factoring quadratics. When you get to calculus, the big things are derivatives and integrals. Then when you get to college and start doing math, things get a little tougher. We start learning about abstract structures, and these become increasingly specific and increasingly complex as we go along.

## What the Hell is a Module?

### October 12, 2010

This post is going to be a gentle introduction to what a module is. It isn’t hard, but, for me, modules were sort of just “thrown in” with a whole bunch of defining properties and no motivation for why I should care about them. I’m hoping to motivate them at least a little bit so that you feel more comfortable thinking and working with them!

## Applying Lagrange!: Groups of Prime Orders.

### June 29, 2010

Little post. Because I love doing things that comments tell me to do, we’re going to use Lagrange to prove a neato theorem. Now, normally, if I told you, “Hey, guy, I’ve got a group with $n$ elements. Which one is it?” you’d probably be unable to tell me! Why? Lots of different groups have the same order! For example, if we’re talking about order 8, are we talkin’ $\mathbb{Z}/8\mathbb{Z}$? Are we talkin’ $\mathbb{Z}/4\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$? Are we talkin’ $D_{4}$? I just don’t know!

How could I have been so naive? How could I have been so myopic? How is it that I thought I could just wrap up group theory without mentioning Lagrange’s theorem? How could I let this topic die out not with a bang but with a whimper?

Let us, for old time’s sake, state one more theorem for the group theory primer — and this one’s a biggie! Remember how division is defined for rational numbers? $\frac{a}{b}$ sort of means “split $a$ into little piles of size $b$, and $\frac{a}{b}$ is how many piles there are.” For example, if we have 12 batteries and put them into piles of 3 batteries each, how many piles do we have? This doesn’t take a rocket scientist: $\frac{12}{3} = 4$.

Last time we talked about a whole lot of stuff. We did homomorphisms, isomorphisms, and talked about the first isomorphism theorem. What did this one state? It states that if $G$ and $H$ are groups and $\varphi: G \to H$ is a homomorphism, then we have that $G/\ker(\varphi) \cong \text{im}(\varphi)$; or, in other words, the quotient of $G$ by the kernel of the map is isomorphic to the image of the map. This makes sense if you think about it: we’re kind of condensing everything that goes to the identity when we map it away from $G$, and we say that these elements ultimately don’t matter in the image — but, because of the nice properties of homomorphisms, a lot of other elements map onto each other, too.

Today, we’re going to discuss the final two isomorphism theorems (which don’t come up as often, but they’re nice) and conclude with one of the most used theorems in elementary abstract algebra: Cauchy’s Theorem.