Recently, I’ve been learning to program in a new language and have been doing Project Euler problems for practice — of course, as the name suggests, most of these problems must be solved (efficiently) with mathematical techniques.  Two of the most common routines I’ve needed are a prime tester and a GCD finder.  I’ll post about the former later, but the latter is an interesting problem in its own right:

 

Initial Problem.  Given two natural (positive whole) numbers called m, n, can we find some other natural number that divides both of them?

 

This problem is a first step.  It’s nice to be able to write numbers as a multiple of some other number; for example, if we have 16 and 18, we may write them as 8\times 2 and 9\times 2, thus giving us some insight as to the relationship between these two numbers. In that case it may be easy to see, but perhaps if you’re given the numbers 46629 and 47100, you may not realize right away that these numbers are 99\times 471 and 100\times 471 respectively.  This kind of factorization will reveal "hidden" relationships between numbers.  

So, given two numbers, how do we find if something divides both of them — in other words, how do we find the common divisors of two numbers?  If we think back to when we first began working with numbers (in elementary school, perhaps) the first thing to do would be to note that 1 divides every number.  But that doesn’t help us all that much, as it turns out, so we go to the next number: if both numbers are even, then they have 2 as a common factor.  Then we "factor" both numbers by writing them as 2\times\mbox{ something} and then attempt to keep dividing things out of the something.  We then move on to 3, skip 4 (since this would just be divisible by 2 twice), go on to 5, then 7, then…and continue for the primes.  This gives a prime factorization, but we have to note that if, say, 2 and 5 divide some number, then so does 10.  These latter divisors are the composite factors.

This seems excessive, but it is sometimes the only way one can do it. 

Anecdote!: On my algebra qualifying exam, there was a question regarding a group of order 289 which required us to see if 289 was prime or not; if not, we were to factor it.  We were not allowed calculators, so what could we do?  Try everything.  Note that we only need to try up to the square root of the number (which we could estimate in other ways), but it’s still a number of cases.  If you check, none of the following numbers divide into 289: 2, 3, 5, 7, 11, 13.  At this point, I was about to give up and call it a prime, but, for whatever reason, I decided to try 17.  Of course, as the clever reader will have pointed out, 289 = 17\times 17.  It is not prime.  There was, luckily, only one student who thought it was prime, but it points out how the algorithm above is not entirely trivial if one does not have access to a computer or calculator. 
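(If you do happen to have a computer handy, the trial-division idea above takes only a few lines of code.  Here is a minimal sketch; Python is my own choice of language, purely for illustration, since nothing in the post depends on any particular language.)

```python
def is_prime(n):
    """Trial division: test candidate divisors up to the square root of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # only need to check up to sqrt(n)
        if n % d == 0:
            return False       # found a divisor, so n is composite
        d += 1
    return True

print(is_prime(289))   # False, since 289 = 17 * 17
```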

 

Once we have a common divisor, or a set of common divisors, a natural thing to want to do is to find the biggest (we already have the smallest, 1) since in this way we can write our numbers with the largest common factor multiplied by some other number.  It will, in effect, make things prettier.

 

Real Problem.  Find the greatest divisor which is common to two natural numbers, m, n.

 

If you were just learning about this kind of thing, you may spout out the following solution: find all of the common divisors, then pick the greatest.  While this is not especially efficient, it is a solution.  Unfortunately, even for small numbers, this gets out of hand quickly.  For example, 60 and  420 have the following common divisors: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60.  This takes a while to compute by hand. 
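(Spelled out as code, the naive method is just this; as before, Python is my own choice and the function name is made up for illustration.)

```python
def common_divisors(m, n):
    """The naive approach: test every candidate up to the smaller number."""
    return [d for d in range(1, min(m, n) + 1) if m % d == 0 and n % d == 0]

divisors = common_divisors(60, 420)
print(divisors)        # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(max(divisors))   # 60, the greatest common divisor
```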

Even if we were to find prime factors, this would be 60 = 2^2 \times 3\times 5 and 420 = 2^2 \times 3 \times 5\times 7, which gives us that they share a number of prime factors.  A bit of thinking gives us that we take all of the prime factors they "share" and multiply them together to get the greatest common divisor.  This is another potential solution which is much faster than simply listing out all of the common divisors.  Unfortunately, this falls prey to the same kind of trap that other prime-related problems do: it is, at times, especially difficult to factor large composite numbers.  For example, the "reasonably small" number 49740376105597 has a prime factorization of 2741 \times 37813 \times 479909; this is not at all efficient to factor if one does not have a computer or a specialized calculator with a factoring algorithm on it.  As a mean joke, you may ask your friend to factor something like 1689259081189, which is actually the product of the 100,000th and 100,001st prime — that is, they would need to test 99,999 primes before getting to the one which divides the number.  If they divided by one prime per second (which is quite fast!) this would take them 1 day, 3 hours, and 46 minutes.  Not especially effective, but it will eventually get the job done.
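(For completeness, here is a sketch of the factor-and-match method just described.  It works fine for small numbers; for large numbers the factoring step is exactly the bottleneck discussed above.)

```python
def prime_factorization(n):
    """Factor n by trial division; returns a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                          # whatever is left over is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

def gcd_by_factoring(m, n):
    """Multiply together the prime powers shared by both factorizations."""
    fm, fn = prime_factorization(m), prime_factorization(n)
    g = 1
    for p in fm:
        if p in fn:
            g *= p ** min(fm[p], fn[p])
    return g

print(gcd_by_factoring(60, 420))   # 60
```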

 

Real Problem, With Efficiency: Find the greatest divisor which is common to two natural numbers, m,n, but do so in an efficient manner (we’ve all got deadlines!).

 

We need to sit down and think about this now.  We need an entirely new idea.  We note, at least, that for the two numbers m,n, one of them must be larger than the other (or else the problem is trivial).  One thing to try would be to see if the smaller one goes into the larger one (for example, above we had 60 going into 420, which gave us the easy solution that 60 must be the greatest common divisor).  If not, maybe we can see how much is left over.  That is, if m is the larger number,

m = a_{1}n + r_{1}

where here a_{1} is the number of times n goes into m without exceeding it, and r_{1} is the "remainder"; if it’s equal to 0, then n evenly divides into m, and otherwise it is less than n (or else we could divide an additional n into m). 

Using this, if r_{1}\neq 0, we may write m - a_{1}n = r_{1}; this means that, in particular, anything which divides both m and n must also divide r_{1}, and, conversely, anything which divides both n and r_{1} must also divide m.  In other words, the common divisors of m and n are exactly the common divisors of n and r_{1}.  But we don’t yet know what divides n and r_{1}; so let’s see how many times r_{1} goes into n.  Using the same process…

n = a_{2}r_{1} + r_{2}

and by rearranging, we have n - a_{2}r_{1} = r_{2}.  By the same reasoning as before, the common divisors of n and r_{1} are exactly the common divisors of r_{1} and r_{2}.  In particular, if r_{2} divides r_{1} evenly, then r_{2} also divides n and m, so we would be able to say that r_{2} was a common divisor of m and n (why?).  That’s something at least. 

The cool thing about our algorithm here is that, because a_{1}n + r_{1} = m we have that either r_{1} = 0 and we’re done with the algorithm, or r_{1} > 0 and we may form a new equation n = a_{2}r_{1} + r_{2}; this equation has, on the left-hand side, the number n which is less than the previous equation’s left-hand side, which was m.  Continuing this process, we will have r_{1}, r_{2}, \dots on the left-hand side, each of which is less than the one which came before it.  Because r_{i} \geq 0 for any of the remainders, eventually it will become 0 (why?) and this algorithm will terminate.  That is, we will have found some r_{i} which is a common divisor for both n, m; specifically, it will be the r_{i}\neq 0 such that r_{i+1} = 0 (or, it may simply be n if n divides m).

This algorithm, called the Euclidean Algorithm, actually does more "automatically": it not only finds a common divisor, but actually finds the greatest common divisor of m,n, which, from now on, we will denote \gcd(m,n).  The "proof" of this is simply noting that \gcd(m,n) = \gcd(n,r_{1}) = \gcd(r_{1},r_{2}) = \cdots = \gcd(r_{i-1},r_{i}) = r_{i} (we noted this above without making reference to the gcd, but the reader should attempt to go through all the same steps using the idea of the gcd). 

So.  If you have two natural numbers, n,m, you divide them, find the remainder, write the equation, then continue as above until you get a 0 remainder.  Then you pick the remainder directly before you got 0 as your gcd (or, you pick the smaller number if one number divides the other).  Pretty simple algorithm, but is it efficient?

Without going into formal "efficiency" definitions: "yes", it is quite efficient.  To illustrate, let’s take an "average" example using the "large" numbers 1337944608 and 4216212.  We note (by pen and paper, or by using a standard calculator) that

1337944608 = 317(4216212) + 1405404.

Next, we note that

4216212 = 3(1405404) + 0

which instantly gives us the solution \gcd(4216212, 1337944608) = 1405404.  That’s pretty awesome.  Note that this was an especially quick trial, but even the "worst" ones are relatively quick. 
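(For the record, here is the whole algorithm as a short sketch, again in Python.  The loop just repeats the divide-and-keep-the-remainder step until the remainder hits 0, at which point the previous remainder is the answer.)

```python
def gcd(m, n):
    """Euclidean algorithm: replace (m, n) by (n, remainder) until the remainder is 0."""
    while n != 0:
        m, n = n, m % n    # the divisor becomes the new dividend; the remainder becomes the new divisor
    return m

print(gcd(1337944608, 4216212))   # 1405404, matching the computation above
print(gcd(60, 420))               # 60
```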

 

Unexpected Corollary!:  For n,m natural numbers, if \gcd(n,m) = k then there exist integers a,b such that an + bm = k.

 

This is more useful than you might think at first glance, and we’ll get into why in a later post, but what’s nice about this corollary is that it comes "for free" from the Euclidean algorithm.  Note that, since k divides n, m, it suffices to prove this corollary for an + bm = 1 where n, m have \gcd(n,m) = 1.  The proof uses induction on the number of steps of the Euclidean algorithm for those numbers, but for those of you who are more experienced and know modular arithmetic, you may enjoy the following simple proof:

 

"Clever" Proof of the Corollary: Let m > n (for equality, the proof is easy).  We will only care about remainders in this proof, so we will look at some numbers modulo m.  Consider

 

r_{1} = n\mod m

r_{2} = 2n\mod m

\vdots

r_{m-1} = (m-1)n\mod m

 

Note there are exactly m-1 remainders here and that the remainder 0 never occurs (since m,n are relatively prime).  Suppose that r_{i} \neq 1 for each of the i; that is, the remainder 1 does not ever show up in this list.  By the pigeon-hole principle (as there are m - 1 remainders but only m -2 possible values for the remainders) we must have that r_{i} = r_{j} for some i\neq j.  That is, we have

in \mod m = jn \mod m

which implies

(i-j)n\mod m = 0

but this is impossible: since m and n are relatively prime, this would force m to divide i-j; but i\neq j and both are between 1 and m-1, so 0 < |i-j| < m, and m cannot divide i-j.  Hence, the remainder 1 must occur.  That is, r_{c} = 1 for some c and

cn \mod m = 1.

But what does this mean?  It means that cn = am + 1 for some integer a, so that cn - am = 1.  To make this prettier, let b = -a and we find that there exist integers b,c such that cn + bm = 1, as required.  \Box

 

Pretty slick, no?
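(In fact, if you keep a little extra bookkeeping while running the Euclidean algorithm, it hands you the integers a, b from the corollary directly; this is usually called the extended Euclidean algorithm.  Here is a minimal sketch, again in Python and entirely my own, not anything from the "clever" proof above.)

```python
def extended_gcd(m, n):
    """Return (g, a, b) with a*m + b*n == g == gcd(m, n)."""
    a0, b0 = 1, 0                      # coefficients writing the current m in terms of the original inputs
    a1, b1 = 0, 1                      # coefficients writing the current n in terms of the original inputs
    while n != 0:
        q = m // n
        m, n = n, m - q * n            # the usual Euclidean step
        a0, a1 = a1, a0 - q * a1       # update both sets of coefficients in parallel
        b0, b1 = b1, b0 - q * b1
    return m, a0, b0

print(extended_gcd(7, 30))   # (1, 13, -3), and indeed 13*7 + (-3)*30 = 1
```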


This post will require some very basic knowledge of category theory (like, what a category is, and how to make a poset into a category).  For everything below, I will be a bit informal, but I will essentially mean that A, B are objects in a category, and f:A\to B is some morphism between them which is also in the category.

 

The "natural" extension of the notion of a surjective map (in, say, the category of sets) is

 

Definition.  A map f:A\to B is an epimorphism if, for each object Z and maps g,g':B\to Z, we have that if g\circ f = g'\circ f then g = g'.

 

You should prove for yourself that this is, in fact, what a surjective map "does" in the category of sets.  Pretty neat.  Similarly, for injective maps (in, say, the category of sets) we have the more general notion:

 

Definition. A map f:A\to B is a monomorphism if, for each object Z and maps g,g':Z\to A, we have that if f\circ g = f\circ g' then g = g'.

 

Again, you should prove for yourself that this is the property that injective mappings have in the category of sets.  Double neat.  There is also a relatively nice way to define an isomorphism categorically — which is somewhat obvious if you’ve seen some algebraic topology before.
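(A brief aside before isomorphisms: you obviously can’t check "for each object Z" by computer in general, but for a map between small finite sets you can at least spot-check the epimorphism condition against all maps into one fixed test set.  Here is a sketch; the sets and the map are my own toy example, and the monomorphism check is analogous, precomposing instead of postcomposing.)

```python
from itertools import product

A, B, Z = (0, 1, 2), ('x', 'y'), (0, 1, 2)
f = {0: 'x', 1: 'y', 2: 'y'}                     # a surjective map A -> B

def all_maps(dom, cod):
    """Every function dom -> cod, represented as a dict."""
    return [dict(zip(dom, values)) for values in product(cod, repeat=len(dom))]

def compose(g, f):
    """(g o f)(a) = g(f(a))."""
    return {a: g[f[a]] for a in f}

# Epimorphism spot-check against the test object Z:
# whenever g o f == g' o f, we should be forced to have g == g'.
violations = [(g, h) for g in all_maps(B, Z) for h in all_maps(B, Z)
              if compose(g, f) == compose(h, f) and g != h]
print(violations)   # [] : no two distinct maps out of B are confused by composing with f
```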

 

Definition. A map f:A\to B is an isomorphism if there is some mapping g:B\to A such that f\circ g = 1_{B} and g\circ f = 1_{A}, where 1_{A},1_{B} denote the identity morphism from the subscripted object to itself.

 

Now, naively, one might think, "Okay, if I have some certain kind of morphism in my category (set-maps, homomorphisms, homeomorphisms, poset relations, …) then if it is an epimorphism and a monomorphism, it should automatically be an isomorphism."  Unfortunately, this is not the case.  Here are two simple examples.

 

Example (Mono, Epi, but not Iso).  The simplest category in which this phenomenon occurs is the category 2, which I’ve drawn below:

[Diagram: the category 2, with two objects a, b, their identity morphisms, and a single morphism f: a\to b.]

There are two objects, a,b and three morphisms, the identities and the morphism f:a\to b.  First, prove to yourself that this is actually a category.  Second, we note that f:a\to b is an epimorphism: for any object Z there is at most one map b\to Z (only the identity when Z = b, and no maps at all when Z = a), so the property trivially holds.  Third, we note that f:a\to b is a monomorphism for essentially the same reason: there is at most one map Z\to a for each object Z.  Last, we note that f is not an isomorphism: we would need some g: b\to a which satisfied the properties in the definition above…but, there is no map from b\to a.  Upsetting!  From this, we must conclude that f cannot be an isomorphism despite being a mono- and epimorphism.

 

 

Similar Example (Mono, Epi, but not Iso).  Take the category ({\mathbb N}, \leq), the natural numbers with morphisms as the relation \leq.  Which morphisms are the monomorphisms?  Which morphisms are the epimorphisms?  Prove that the only isomorphisms are the identity morphisms.  Conclude that there are a whole bunch of morphisms which are mono- and epimorphisms but which are not isomorphisms.

Let’s talk about Burnside’s theorem.  First, let me note that there are two results commonly called "Burnside’s Theorem."  The first one that I learned (which we won’t be discussing in this post) was:

 

Theorem (Burnside).  If G is a finite group of order p^{a}q^{b} where a,b are non-negative integers and where p,q are primes, then G is solvable.

 

The second one is also a group theoretical result, but a bit more combinatorial-feeling.  In some books (and, apparently, Wikipedia) this second result is called Burnside’s Lemma.  As noted in the Wikipedia article, this theorem was not even due to Burnside, who quoted the result from Frobenius, who probably got it from Cauchy. 

Let’s get some definitions down.  As usual, we’ll denote the order of the group by |G|, and our groups will all be finite in this post.  If we have a group G which acts on a set X, then given some fixed g\in G we define the set Fix_{X}(g) = \{x\in X\,|\, g\cdot x = x\}; this is, of course, the fixed points in X when acted on by the element g; for the remainder of the post, we will simply write Fix(g) with the set X implied.  Remember, when a group acts on X, the "product" will sit inside of X, and we write the action as g\cdot x for an element g\in G acting on an element x\in X.  The orbit of a point x\in X when acted on by G is given by \{g\cdot x\,|\, g\in G\}; we’ll denote this Orb(x), though this is not standard notation.  The orbit, essentially, is all of the possible values y\in X that you can get to by acting on x with elements in your group.

One thing to note, also, is that orbits are pairwise disjoint.  You should prove this to yourself if you haven’t already, but the idea is like this: if A,B are orbits of elements in X and there is some z\in A\cap B, then there are x\in A, y\in B, and g,h\in G such that g\cdot x = h\cdot y = z; but this implies (h^{-1}g)\cdot x = y, which implies the orbits are identical (why?).  Hence, each element of X is in exactly one orbit.

 

We need one more result before we can sink our teeth into Burnside.  Remember the fixed point set above?  For a fixed g, this was all of the elements x such that g\cdot x = x.  There’s a similar notion called a stabilizer, denoted G_{x} = \{g\in G\,|\, g\cdot x = x\}; this is saying that we first fix x\in X, and then look at all the elements g\in G which stabilize it.  These definitions are pretty similar feeling (almost like family!) and, in fact, there is a nice relation between the two:

 

Notation.  Let X/G denote the set of orbits of X when acted on by G; when X is a group and G is a subgroup this is the same as a quotient.

Theorem (Orbit-Stabilizer Theorem).  There is a bijection between G/G_{x} and Orb(x).

 

That is, if we quotient G by the subgroup of elements which fix x, the resulting set of cosets has the same number of elements as the orbit of x.  This might seem a little confusing at first, but if you work through it, it’s not so weird. 

 

Sketch of the Proof.  (Skip this if you’re not comfortable with all this notation above; just go down to the next theorem.)  Here, we want to show a bijection.  Notice that G/G_{x} is the set of cosets hG_{x} for h\in G.  We claim that the mapping \phi which sends \phi: hG_{x} \mapsto h\cdot x is well-defined, injective and surjective (but not a homomorphism).  First, well-defined: if hG_{x} = kG_{x} then k^{-1}h\in G_{x}, which means that (k^{-1}h)\cdot x = x.  This implies, after some manipulation, that h\cdot x = k\cdot x, which means these elements are identical in Orb(x).  Second, surjectivity is clear.  Last, if h\cdot x = g\cdot x in the orbit, then (g^{-1}h)\cdot x = x, which implies g^{-1}h\in G_{x}, which gives gG_{x} = hG_{x}; hence this map is injective.  This gives us that our map \phi is bijective.  \diamond

 

One immediate corollary is that |Orb(x)| = \dfrac{|G|}{|G_{x}|}; that is, the number of elements in the orbit of x is the same as the number of elements in G divided by the number of elements of G which fix x.  Think about this for a minute. 
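(If the notation is getting heavy, a tiny computation might help.  Here is a sketch checking the corollary numerically; the example, the rotations of a square acting on two-element subsets of its vertices, is my own choice and not anything from the post.)

```python
# The four rotations of a square acting on two-element subsets of the vertices {0,1,2,3},
# i.e. on the edges and the diagonals.  A rotation by g sends vertex v to (v + g) mod 4.
G = [0, 1, 2, 3]

def act(g, s):
    return frozenset((v + g) % 4 for v in s)

for x in [frozenset({0, 1}), frozenset({0, 2})]:        # an edge, then a diagonal
    orbit = {act(g, x) for g in G}                      # Orb(x)
    stabilizer = [g for g in G if act(g, x) == x]       # G_x
    print(len(orbit), len(stabilizer),
          len(orbit) == len(G) // len(stabilizer))      # |Orb(x)| = |G| / |G_x| ?
# prints "4 1 True" for the edge and "2 2 True" for the diagonal
```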

 


 

Road to the Proof.

Okay.  Now, let’s think about something for a second.  What is the sum

\displaystyle \sum_{g\in G}|Fix(g)|

telling us?  This counts the elements of X which are fixed by some g\in G, but with multiplicity; for instance, if g\cdot x = x and h\cdot x = x, then x will be counted twice: once as an element of Fix(g) and once as an element of Fix(h).  But how much overlap is there?  This is an innocent-seeming question, and you might think something like, "Well, it depends on how much stuff stabilizes each x.", and this is pretty close to the point. 

First, note that

\displaystyle \sum_{g\in G}|Fix(g)| = |\{(g,x)\in G\times X\,|\,g\cdot x = x\}|

which is just the long way to write out this sum; but, the nice part about that is, we can now think about this as counting, for each x\in X, all of the elements of G which stabilize it (why?).  Then,

\displaystyle \sum_{g\in G}|Fix(g)| = \sum_{x\in X}|G_{x}|.

If you don’t see this, you should prove to yourself why they’re the same sum (why is each element counted in the left-hand side also counted in the right-hand side?).  Now, by the Orbit-Stabilizer theorem above, this right-hand sum becomes pretty nice.  Specifically,

\displaystyle \sum_{x\in X}|G_{x}| = \sum_{x\in X}\frac{|G|}{|Orb(x)|} = |G|\sum_{x\in X}\frac{1}{|Orb(x)|}

where we noted in the last equality that |G| is a constant, so we may pull it out of the sum. 

Recalling that X/G denotes the set of orbits, note that if we take a single orbit (call it A) we will be adding \frac{1}{|A|} up exactly |A| times (since the sum is taken over each x\in X so, in particular, over each x\in A); hence, we will add \frac{1}{|A|}\cdot |A| = 1 for each orbit we have in X/G.  That is,

\displaystyle = |G|\sum_{A\in X/G}1 = |G||X/G|.

Putting this all together, we have

\displaystyle \sum_{g\in G}|Fix(g)| = |G||X/G|.

 

We clean it up a bit, and state the following:

 

Theorem (Burnside’s).  For a finite group G acting on a finite set X, with notation as above, we have \displaystyle |X/G| = \frac{1}{|G|}\sum_{g\in G}|Fix(g)|.

 

That is, the number of orbits is equal to the average, over g\in G, of the number of elements of X fixed by g.  Kind of neat.
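(This is not one of the promised applications, just a toy sanity check of the formula; the example, two-colorings of the vertices of a square under rotation, is again my own.  We count the orbits directly and then with the theorem, and the two answers agree.)

```python
from itertools import product

G = [0, 1, 2, 3]                        # the four rotations of a square
X = list(product('BW', repeat=4))       # all black/white colorings of the four vertices

def act(g, coloring):
    """Rotate a coloring by g positions."""
    return tuple(coloring[(i - g) % 4] for i in range(4))

# Direct count: each orbit is the set of colorings reachable from one another.
orbits = {frozenset(act(g, x) for g in G) for x in X}

# Burnside count: average the number of fixed colorings over the group.
burnside = sum(sum(1 for x in X if act(g, x) == x) for g in G) // len(G)

print(len(orbits), burnside)   # 6 6
```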

 

Next time, we’ll talk about applications!

[Note: It’s been a while!  I’ve now completed most of my pre-research stuff for my degree, so now I can relax a bit and write up some topics.  This post will be relatively short just to “get back in the swing of things.”]

 

In Group Theory, the big question used to be, “Given such-and-such is a group, how can we tell which group it is?”

 

[Image: Ludwig Sylow.  https://i2.wp.com/upload.wikimedia.org/wikipedia/commons/9/9b/Ludwig_Sylow.jpg]

 

The Sylow Theorems (proved by Ludwig Sylow, above) provide a really nice way to do this for finite groups using prime decomposition.  In most cases, the process is quite easy.  We’ll state the theorems here in a slightly shortened form, but you can read about them here.  Note that a subgroup of order p^{\beta} for some \beta is, unsurprisingly, called a p-subgroup.  A p-subgroup of maximal order in G is called a Sylow p-subgroup.

 

Theorem (Sylow).  Let G be a finite group such that |G| = p^{\alpha}m, where p is prime and p\nmid m.  Then,

  1. There exists at least one subgroup of order p^{\alpha}.
  2. The Sylow p-subgroups are conjugate to one-another; that is, if P,Q are Sylow p-subgroups, then there is some g\in G such that gPg^{-1} = Q.  Moreover, for all g\in G, we have that gPg^{-1} is a Sylow p-subgroup.
  3. The number of Sylow p-subgroups of G, denoted by n_{p}, satisfies n_{p} \equiv 1\mbox{ mod } p.  Moreover, n_{p} divides m.

 

This first part says that the set of Sylow p-subgroups of G is not empty if p divides the order of G.  Note that this is slightly abbreviated (the second part is actually more general, and the third part has a few extra parts) but this will give us enough to work with.

 

Problem: Given a group G with |G| = pq for p,q prime and p < q, is G ever simple (i.e., can it happen that G has no nontrivial proper normal subgroups)?  Can we say explicitly what G is?

 

We use the third part of the Sylow theorems above.  We note that n_{q} | p and n_{q} \equiv 1\mbox{ mod } q, but p < q, so this immediately implies that n_{q} = 1 (why?).  So we have one Sylow q-subgroup; let’s call it Q.  Once we have this, we can use the second part of the Sylow theorem: for each g\in G, gQg^{-1} is a Sylow q-subgroup, but we’ve shown that Q is the only one there is!  That means that gQg^{-1} = Q; this says Q is normal in G.  We have, then, that G isn’t simple.  Bummer.

On the other hand, we can actually say what this group is.  So let’s try that.  We know the Sylow q-subgroup, but we don’t know anything about the Sylow p-subgroups.  We know that n_{p} \equiv 1\mbox{ mod }p and n_{p} | q, but that’s about it.  There are two possibilities: either n_{p} = 1 or n_{p} = q.

For the first case, note that if p does not divide q-1, then the modular relation forces n_{p} = 1; this gives us a unique normal Sylow p-subgroup P.  Since P and Q are both normal and their orders multiply up to the order of the group, we have PQ \cong P\times Q \cong G; in other words, G \cong {\mathbb Z}_{p}\times {\mathbb Z}_{q}, which is the cyclic group {\mathbb Z}_{pq}.

For the second case, n_{p} = q; by the modular relation, this can only happen when p divides q-1.  We will have a total of q subgroups of order p and none of these are normal.  This part is a bit more involved (for example, see this post on it), but the punch line is that G is the nonabelian group of order pq, a semidirect product {\mathbb Z}_{q}\rtimes {\mathbb Z}_{p}.

 

I’ll admit that the last part is a bit hand-wavy, but this should at least show you the relative power of the Sylow theorems.  They also come in handy when trying to show something either does or does not have a normal subgroup.  Recall that a simple group has no nontrivial proper normal subgroups.

 

Question.  Is there any simple group with |G| = 165?

 

I just picked this number randomly, but it works pretty well for this example.  We note that |G| = 165 = 3\cdot 5\cdot 11.  Let’s consider, for kicks, n_{11}.  We know n_{11} must divide 3\cdot 5 = 15 and it must be the case that n_{11} \equiv 1\mbox{ mod } 11; putting these two facts together, we get n_{11} = 1.  This immediately gives us a normal subgroup of order 11, which implies there are no simple groups of order 165.

 

Question.  Is there any simple group with |G| = 777?

 

Alas, alack, you may say that 777 is too big of a number to do, but you’d be dead wrong.  Of course, 777 = 3\cdot 7\cdot 37.  Use the same argument as above to show there are no simple groups of this order.

 

Question.  Is there any simple group with |G| = 28?

 

Note that 28 = 2^{2}\cdot 7 so we need to do a little work, but not much.  Just for fun, let’s look at n_{7}.  We must have that it is 1 modulo 7 and it must divide 2^{2} = 4.  Hm.  A bit of thinking will give you that n_{7} = 1, which gives us the same conclusion as above.
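(The arithmetic in these arguments is easy to automate if you want to experiment with other orders.  Here is a small sketch, entirely my own, which lists the values of n_{p} allowed by the third part of the theorem for each prime dividing the order; whenever the only allowed value is 1, you get a normal Sylow p-subgroup for free.)

```python
def prime_factors(n):
    """The distinct primes dividing n, found by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def sylow_counts(order):
    """For each prime p dividing the order, the candidate values of n_p:
    divisors of m = order / p^alpha which are congruent to 1 mod p."""
    out = {}
    for p in prime_factors(order):
        m = order
        while m % p == 0:
            m //= p
        out[p] = [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]
    return out

for order in (165, 777, 28, 56):
    print(order, sylow_counts(order))
# 165 {3: [1, 55], 5: [1, 11], 11: [1]}
# 777 {3: [1, 7, 37, 259], 7: [1], 37: [1]}
# 28  {2: [1, 7], 7: [1]}
# 56  {2: [1, 7], 7: [1, 8]}   <- order 56 is exactly the tricky case mentioned below
```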

 

Of course, there are examples where this doesn’t work nicely.  Think about groups of order 56, for example.  In order to work with these kinds of groups, one must do a bit more digging.  We will look into more of this later.

Reader Beware.

I planned to do a post about tensor products (what they are, why we should care, what we do with them, etc.) but because I’m not comfortable with all of that quite yet, I’m going to assume you know what tensor products are, and do a few explicit calculations.  So, in short, if you don’t already know what tensor products are, don’t read this post.

Our notation will be as follows: k is a field, R is a commutative ring with 1\neq 0, and \otimes_{R} will denote the tensor product of modules over a ring R.  As usual, R[x] will denote the polynomials in x with coefficients in R.

(Note:  My thanks to Brooke, who pointed out that I kept writing "+" when I meant "\otimes."  I hope I’ve not made this error elsewhere, as tensors are "pretty different" from standard addition.)


Wordy Introduction, Motivation.

When you first start high school algebra, the big thing is FOIL-ing, right?  Expanding and factoring quadratics.  When you get to calculus, the big things are derivatives and integrals.  Then when you get to college and start doing math, things get a little tougher.  We start learning about abstract structures, and these become increasingly specific and increasingly complex as we go along.


What the Hell is a Module?

October 12, 2010

This post is going to be a gentle introduction to what a module is.  It isn’t hard, but, for me, modules were sort of just “thrown in” with a whole bunch of defining properties and no motivation for why I should care about them.  I’m hoping to motivate them at least a little bit so that you feel more comfortable thinking and working with them!


Little post.  Because I love doing things that comments tell me to do, we’re going to use Lagrange to prove a neato theorem.  Now, normally, if I told you, “Hey, guy, I’ve got a group G with n elements.  Which one is it?” you’d probably be unable to tell me!  Why?  Lots of different groups have the same order!  For example, if we’re talking about order 8, are we talkin’ D_{8}?  Are we talkin’ Z_{8}?  Are we talkin’ Q_{8}?  I just don’t know!


How could I have been so naive?  How could I have been so myopic?  How is it that I thought I could just wrap up group theory without mentioning Lagrange’s theorem?  How could I let this topic die out not with a bang but with a whimper?

Let us, for old time’s sake, state one more theorem for the group theory primer — and this one’s a biggie!  Remember how division is defined for rational numbers?  \frac{a}{b} sort of means “split a into little piles of size b, and \frac{a}{b} is how many piles there are.”  For example, if we have 12 batteries and put them into piles of 3 batteries each, how many piles do we have?  This doesn’t take a rocket scientist.


Last time we talked about a whole lot of stuff.  We did homomorphisms, isomorphisms, and talked about the first isomorphism theorem.  What did this one state?  It states that if G,H are groups and f:G\rightarrow H is a homomorphism, then G/Ker(f) \cong Im(f), or, in other words, the quotient of G by the kernel of the map is isomorphic to the image of the map.  This makes sense if you think about it: we’re kind of condensing everything that goes to the identity when we map it away from G and we say that these elements ultimately don’t matter in the image — but, because of the nice properties of homomorphisms, a lot of other elements map onto each other, too.

Today, we’re going to discuss the final two isomorphism theorems (which don’t come up as often, but they’re nice) and conclude with one of the most used theorems in elementary abstract algebra: Cauchy’s Theorem.
