Let’s talk about Burnside’s theorem.  First, let me note that there are two results commonly called "Burnside’s Theorem."  The first one that I learned (which we won’t be discussing in this post) was:


Theorem (Burnside).  If G is a finite group of order p^{a}q^{b} where a,b are non-negative integers and where p,q are primes, then G is solvable.


The second one is also a group-theoretic result, but it has a more combinatorial flavor.  In some books (and, apparently, Wikipedia) this second result is called Burnside’s Lemma.  As noted in the Wikipedia article, this theorem was not even due to Burnside, who quoted the result from Frobenius, who probably got it from Cauchy.

Let’s get some definitions down.  As usual, we’ll denote the order of the group by |G|, and all of our groups in this post will be finite.  If we have a group G which acts on a set X, then given some fixed g\in G we define the set Fix_{X}(g) = \{x\in X\,|\, g\cdot x = x\}; this is, of course, the set of points of X fixed by the element g.  For the remainder of the post, we will simply write Fix(g), with the set X implied.  Remember, when a group acts on X, the "product" will sit inside of X, and we write the action as g\cdot x for an element g\in G acting on an element x\in X.  The orbit of a point x\in X under the action of G is given by \{g\cdot x\,|\, g\in G\}; we’ll denote this Orb(x), though this is not standard notation.  The orbit, essentially, is the set of all values y\in X that you can get to by acting on x with elements of your group.
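To make these definitions concrete, here is a small illustrative example of my own (not from the post): the cyclic group of four rotations acting on 2-colorings of a 4-bead necklace, with Fix(g) and Orb(x) computed by brute force.  The names act, fix, and orb are just labels I chose.

```python
from itertools import product

# Illustrative toy action: the cyclic group C4 of rotations
# g in {0, 1, 2, 3} acting on 2-colorings of a 4-bead necklace.
G = range(4)
X = list(product("BW", repeat=4))  # all 16 colorings as tuples

def act(g, x):
    """The action g.x: rotate the coloring x by g positions."""
    return x[g:] + x[:g]

def fix(g):
    """Fix(g) = {x in X | g.x = x}."""
    return {x for x in X if act(g, x) == x}

def orb(x):
    """Orb(x) = {g.x | g in G}."""
    return {act(g, x) for g in G}

print(len(fix(1)))                     # 2: only the monochrome colorings
print(len(orb(("B", "W", "W", "W"))))  # 4: rotations of one black bead
```

Note that a one-step rotation fixes only the two all-one-color necklaces, while the half-turn g = 2 fixes four colorings, so Fix(g) genuinely depends on g.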

One thing to note, also, is that distinct orbits are pairwise disjoint.  You should prove this to yourself if you haven’t already, but the idea is like this: if A,B are orbits of elements x,y\in X respectively, suppose z\in A\cap B; then there are some g,h\in G such that g\cdot x = h\cdot y = z, which implies (h^{-1}g)\cdot x = h^{-1}\cdot z = y, and this implies the orbits are identical (why?).  Hence, each element of X is in exactly one orbit.
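A quick brute-force sanity check of this partition claim, on a small made-up action (rotations of 2-colorings of a 4-bead necklace; the example and names are mine):

```python
from itertools import product

# Check that orbits partition X: rotations g in {0, 1, 2, 3}
# acting on 2-colorings of a 4-bead necklace.
G = range(4)
X = list(product("BW", repeat=4))

def act(g, x):
    return x[g:] + x[:g]  # rotate the coloring by g positions

# Collect the distinct orbits as frozensets so they can live inside a set.
orbits = {frozenset(act(g, x) for g in G) for x in X}

# Each point lies in exactly one orbit, and the orbit sizes sum to |X|.
assert all(sum(x in A for A in orbits) == 1 for x in X)
assert sum(len(A) for A in orbits) == len(X)
print(len(orbits), "pairwise disjoint orbits covering all", len(X), "points")
```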


We need one more result before we can sink our teeth into Burnside.  Remember the fixed-point set above?  That was the set of all elements x such that g\cdot x = x for a fixed g.  There’s a similar notion called a stabilizer, denoted G_{x} = \{g\in G\,|\, g\cdot x = x\}; this is saying that we first fix x\in X, and then look at all the elements g\in G which stabilize it.  These definitions feel pretty similar (almost like family!) and, in fact, there is a nice relation between the two:


Notation.  Let X/G denote the set of orbits of X under the action of G; when X is a group and G is a subgroup acting by multiplication, the orbits are exactly the cosets, so this agrees with the usual quotient notation.

Theorem (Orbit-Stabilizer Theorem).  There is a bijection between G/G_{x} and Orb(x).


That is, if we quotient G by the subgroup of elements which fix x, the resulting set of cosets has exactly as many elements as the orbit of x.  This might seem a little confusing at first, but if you work through it, it’s not so weird.


Sketch of the Proof.  (Skip this if you’re not comfortable with all the notation above; just go down to the next theorem.)  Here, we want to show a bijection.  Notice that G/G_{x} is the set of cosets hG_{x} for h\in G.  We claim that the mapping \phi which sends \phi: hG_{x} \mapsto h\cdot x is well-defined, injective, and surjective (but not a homomorphism).  First, well-definedness: if hG_{x} = kG_{x} then k^{-1}h\in G_{x}, which means that (k^{-1}h)\cdot x = x.  This implies, after some manipulation, that h\cdot x = k\cdot x, which means these cosets map to the same element of Orb(x).  Second, surjectivity is clear.  Last, if h\cdot x = g\cdot x in the orbit, then (g^{-1}h)\cdot x = x, which implies g^{-1}h\in G_{x}, which gives gG_{x} = hG_{x}; hence this map is injective.  This gives us that our map \phi is bijective.  \diamond


One immediate corollary is that |Orb(x)| = \dfrac{|G|}{|G_{x}|}; that is, the number of elements in the orbit of x is the number of elements in G divided by the number of elements of G which fix x.  Think about this for a minute.
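We can spot-check this corollary numerically.  Below is a minimal sketch on a toy action of my own devising (rotations of 4-bead, 2-color necklaces); for every point, |Orb(x)| should equal |G| divided by |G_x|.

```python
from itertools import product

# Numerical spot check of |Orb(x)| = |G| / |G_x| on a toy action:
# rotations g in {0, 1, 2, 3} of 2-colorings of a 4-bead necklace.
G = range(4)
X = list(product("BW", repeat=4))

def act(g, x):
    return x[g:] + x[:g]  # rotate the coloring by g positions

for x in X:
    orbit = {act(g, x) for g in G}
    stabilizer = {g for g in G if act(g, x) == x}
    assert len(orbit) == len(G) // len(stabilizer)

print("orbit-stabilizer checked for all", len(X), "colorings")
```

For instance, the alternating coloring BWBW is stabilized by the half-turn as well as the identity, so its stabilizer has size 2 and its orbit has size 4/2 = 2.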



Road to the Proof.

Okay.  Now, let’s think about something for a second.  What is the sum

\displaystyle \sum_{g\in G}|Fix(g)|

telling us?  This counts, for each g\in G, the elements of X which are fixed by g; but there might be some overlap, since if g\cdot x = x and h\cdot x = x, then x will be counted twice: once as an element of Fix(g) and once as an element of Fix(h).  But how much overlap is there?  This is an innocent-seeming question, and you might think something like, "Well, it depends on how much stuff stabilizes each x," and this is pretty close to the point.

First, note that

\displaystyle \sum_{g\in G}|Fix(g)| = |\{(g,x)\in G\times X\,|\,g\cdot x = x\}|

which is just the long way to write out this sum; but the nice part about that is, we can now count the same set of pairs the other way: for each x\in X, count all of the elements of G which stabilize it (why does this give the same count?).  Then,

\displaystyle \sum_{g\in G}|Fix(g)| = \sum_{x\in X}|G_{x}|.

If you don’t see this, you should prove to yourself why they’re the same sum (why is each pair (g,x) counted in the left-hand side also counted in the right-hand side?).  Now, by the Orbit-Stabilizer theorem above, this right-hand sum becomes pretty nice.  Specifically,

\displaystyle \sum_{x\in X}|G_{x}| = \sum_{x\in X}\frac{|G|}{|Orb(x)|} = |G|\sum_{x\in X}\frac{1}{|Orb(x)|}

where we noted in the last equality that |G| is a constant, so we may pull it out of the sum. 
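These equalities can be checked numerically on a toy action (rotations of 2-colorings of a 4-bead necklace, an example of my own); summing |Fix(g)| over the group, summing |G_x| over the set, and |G| times the sum of 1/|Orb(x)| should all agree.

```python
from itertools import product

# Checking the double count: rotations g in {0, 1, 2, 3}
# acting on 2-colorings of a 4-bead necklace.
G = range(4)
X = list(product("BW", repeat=4))

def act(g, x):
    return x[g:] + x[:g]  # rotate the coloring by g positions

# Sum of |Fix(g)| over the group.
fix_sum = sum(sum(1 for x in X if act(g, x) == x) for g in G)
# Sum of |G_x| over the set: the same pairs, counted the other way.
stab_sum = sum(sum(1 for g in G if act(g, x) == x) for x in X)
# |G| times the sum of 1/|Orb(x)|, via the Orbit-Stabilizer theorem.
orbit_sum = len(G) * sum(1 / len({act(g, x) for g in G}) for x in X)

print(fix_sum, stab_sum, orbit_sum)  # all three equal 24 here
```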

Recalling that X/G denotes the set of orbits, we have that if we take a single orbit (call it A) we will be adding \frac{1}{|A|} up exactly |A| times (since the sum is taken over each x\in X so, in particular, over each x\in A); hence, we will add \frac{1}{|A|}\cdot |A| = 1 for each orbit we have in X/G.  That is,

\displaystyle = |G|\sum_{A\in X/G}1 = |G||X/G|.

Putting this all together, we have

\displaystyle \sum_{g\in G}|Fix(g)| = |G||X/G|.


We clean it up a bit, and state the following:


Theorem (Burnside’s).  For a finite group G acting on a finite set X, with notation as above, we have \displaystyle |X/G| = \frac{1}{|G|}\sum_{g\in G}|Fix(g)|.


That is, the number of orbits is equal to the sum, over g\in G, of the number of elements of X fixed by g, divided by the number of elements in G; in other words, it is the average number of fixed points.  Kind of neat.
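As a quick sanity check of the theorem, here is a toy computation (the necklace example is mine, not the post’s): counting orbits directly and averaging |Fix(g)| over the group give the same answer.

```python
from itertools import product

# Burnside's lemma on a toy action: rotations g in {0, 1, 2, 3}
# of 2-colorings of a 4-bead necklace.
G = range(4)
X = list(product("BW", repeat=4))

def act(g, x):
    return x[g:] + x[:g]  # rotate the coloring by g positions

# Left-hand side: count the orbits |X/G| directly.
orbits = {frozenset(act(g, x) for g in G) for x in X}

# Right-hand side: average |Fix(g)| over the group.
avg_fixed = sum(sum(1 for x in X if act(g, x) == x) for g in G) / len(G)

print(len(orbits), avg_fixed)  # 6 distinct necklaces either way
```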


Next time, we’ll talk about applications!


Little post.  Because I love doing things that comments tell me to do, we’re going to use Lagrange to prove a neato theorem.  Now, normally, if I told you, “Hey, guy, I’ve got a group G with n elements.  Which one is it?” you’d probably be unable to tell me!  Why?  Lots of different groups have the same order!  For example, if we’re talking about order 8, are we talkin’ D_{8}?  Are we talkin’ Z_{8}?  Are we talkin’ Q_{8}?  I just don’t know!


How could I have been so naive?  How could I have been so myopic?  How is it that I thought I could just wrap up group theory without mentioning Lagrange’s theorem?  How could I let this topic die out not with a bang but with a whimper?

Let us, for old time’s sake, state one more theorem for the group theory primer — and this one’s a biggie!  Remember how division is defined for rational numbers?  \frac{a}{b} sort of means “split a into little piles of size b, and \frac{a}{b} is how many piles there are.”  For example, if we have 12 batteries and put them into piles of 3 batteries each, how many piles do we have?  This doesn’t take a rocket scientist.


Last time we talked about a whole lot of stuff.  We did homomorphisms, isomorphisms, and talked about the first isomorphism theorem.  What did this one state?  It states that if G,H are groups and f:G\rightarrow H is a homomorphism, then we have that G/Ker(f) \cong Im(f), or, in other words, the quotient of G by the kernel of the map is isomorphic to the image of the map.  This makes sense if you think about it: we’re kind of condensing everything that goes to the identity when we map it away from G, and we say that these elements ultimately don’t matter in the image; but, because of the nice properties of homomorphisms, a lot of other elements map onto each other, too.

Today, we’re going to discuss the final two isomorphism theorems (which don’t come up as often, but they’re nice) and conclude with one of the most used theorems in elementary abstract algebra: Cauchy’s Theorem.


Last time we went over some normal subgroups, how to direct product two groups, and how to quotient out by (normal) subgroups.  As we said before, though, groups (like vector spaces) are pretty boring by themselves.  Yes, studying groups by themselves can give us relations between elements and so on (like what kinds of elements in a particular group have the property such that when you square them they become the identity), but, like vector spaces, we can learn a lot about a group by what it can and can’t map into nicely.

Now, let’s think about this for a second.  What if I said something like the following: let’s take a group G such that the elements are \mbox{possum} and \mbox{fox}.  Let’s say that

\mbox{possum}\times \mbox{possum} = \mbox{possum}

\mbox{possum} \times \mbox {fox} = \mbox{fox} \times \mbox{possum} = \mbox{possum}

\mbox{fox} \times \mbox{fox} = \mbox{fox}

and those are all the possible interactions.  You could give any reasonable justification for this group, but it reduces to the fact that it is just a group: a set of elements and an operation.


If we think about groups as if they were numbers, we’d want to add, subtract, multiply, and divide stuff.  Unfortunately, groups aren’t as simple as numbers, and we have more complex notions of what all of these things should correspond to.


Last time, we talked about what a group is.  This time, we’ll go over some specific groups.  In the next post, we’re going to go over some basic theorems about groups.


Personal Motivation: This morning I awoke from a dream and all I could think about was manipulating group elements as if they were linear maps.  We’ve been talking about linear maps a lot, and one of their nice properties is that they can be represented by a matrix; if we were to represent group elements as matrices, then we would be able to use a lot of the linear algebra we know to prove a few things about groups!  In fact, this type of thinking has a name: representation theory.  I won’t lie to you, readers: I’ve taken a class in this, but I hated it and paid very little attention in it.  Despite this, I’m going to begin going over the text and select some nice theorems to write about.

What I’m actually going to write about in this post: Because groups are so damn important in abstract algebra, I’m going to take this post to construct them.  Because this would be quite boring to the general mathematician who has already taken abstract algebra, I’m going to do it in a slightly weird way: I’m going to build them up by adding structure to sets.
