## The Burnside Theorem and Counting, part I.

### December 8, 2012

Let’s talk about Burnside’s theorem. First, let me note that there are *two* results commonly called "Burnside’s Theorem." The first one that I learned (which we won’t be discussing in this post) was:

**Theorem (Burnside)**. If $G$ is a finite group of order $p^{a}q^{b}$, where $a, b$ are non-negative integers and $p, q$ are primes, then $G$ is solvable.

The second one is also a group theoretical result, but a bit more combinatorial-feeling. In some books (and, apparently, Wikipedia) this second result is called Burnside’s Lemma. As noted in the Wikipedia article, this theorem was not even due to Burnside, who quoted the result from Frobenius, who probably got it from Cauchy.

Let’s get some definitions down. As usual, we’ll denote the *order* of the group $G$ by $|G|$, and our groups will *all be finite in this post*. If we have a group $G$ which acts on a set $A$, then given some fixed $g \in G$ we define the set $A^{g} = \{x \in A : g \cdot x = x\}$; this is, of course, the set of fixed points in $A$ when acted on by the element $g$; for the remainder of the post, we will simply write $A^{g}$ with the set $A$ implied. Remember, when a group $G$ acts on $A$, the "product" $g \cdot x$ will sit inside of $A$, and we write the action as $g \cdot x$ for an element $g \in G$ acting on an element $x \in A$. The *orbit* of a point $x \in A$ when acted on by $G$ is given by $\{g \cdot x : g \in G\}$; we’ll denote this $\mathcal{O}_{x}$, though this is not standard notation. The orbit, essentially, is all of the possible values that you can get to by acting on $x$ with elements in your group.

One thing to note, also, is that orbits are *pairwise disjoint*. You should prove this to yourself if you haven’t already, but the idea is like this: if $\mathcal{O}_{x}, \mathcal{O}_{y}$ are orbits of elements $x, y \in A$, then suppose $z \in \mathcal{O}_{x} \cap \mathcal{O}_{y}$; then there are some $g, h \in G$ such that $g \cdot x = z = h \cdot y$, but this implies $x = g^{-1}h \cdot y$, which implies the orbits are identical (why?). Hence, each element of $A$ is in *exactly one orbit*.
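To make the partition concrete, here is a small Python sketch. This example is my own, not one from the post: the two-element subgroup $\{\mathrm{id}, r^{2}\}$ of the square’s rotations, acting on the four vertices, splits them into two disjoint orbits.

```python
# A small, self-contained example (mine, not from the post): the subgroup
# {identity, 180-degree rotation} of the square's symmetries acting on the
# vertices {0, 1, 2, 3}. Each group element is a permutation stored as a
# tuple, where g[i] is the vertex that g sends vertex i to.
group = [(0, 1, 2, 3), (2, 3, 0, 1)]

def orbit(x, group):
    """The orbit of x: every point of the form g . x for g in the group."""
    return frozenset(g[x] for g in group)

orbits = {orbit(x, group) for x in range(4)}

# The orbits are pairwise disjoint and cover the whole set:
# each vertex lands in exactly one orbit.
assert orbits == {frozenset({0, 2}), frozenset({1, 3})}
assert sorted(v for o in orbits for v in o) == [0, 1, 2, 3]
```

Since the group is closed under composition, a single pass over its elements already produces the whole orbit; no iteration is needed.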

We need one more result before we can sink our teeth into Burnside. Remember the fixed point set $A^{g}$ above? This was all of the elements $x \in A$ such that $g \cdot x = x$ for some fixed $g \in G$. There’s a similar notion called a *Stabilizer*, denoted $G_{x} = \{g \in G : g \cdot x = x\}$; this is saying that we first fix $x \in A$, and then look at all the elements of $G$ which stabilize it. These definitions are pretty similar feeling (almost like family!) and, in fact, there is a nice relation between the two:

**Notation.** Let $A/G$ denote the set of orbits of $A$ when acted on by $G$; when $A$ is a group and $G$ is a subgroup, this is the same as a quotient.

**Theorem (Orbit-Stabilizer Theorem).** There is a bijection between $G/G_{x}$ and $\mathcal{O}_{x}$.

That is, if we collect the elements of $G$ into cosets of the subgroup which fixes $x$, then we will have the same number of cosets as there are elements in the orbit of $x$. This might seem a little confusing at first, but if you work through it, it’s not so weird.

*Sketch of the Proof.* (Skip this if you’re not comfortable with all this notation above; just go down to the next theorem.) Here, we want to show a bijection. Notice that $G/G_{x}$ is the set of cosets $\{hG_{x} : h \in G\}$ for $G_{x}$. We claim that the mapping which sends $hG_{x} \mapsto h \cdot x$ is well-defined, injective and surjective (but not a homomorphism). First, well-defined: if $hG_{x} = kG_{x}$, then $k^{-1}h \in G_{x}$, which means that $k^{-1}h \cdot x = x$. This implies, after some manipulation, that $h \cdot x = k \cdot x$, which means these elements are identical in $\mathcal{O}_{x}$. Second, surjectivity is clear. Last, if $h \cdot x = k \cdot x$ in the orbit, then $k^{-1}h \cdot x = x$, which implies $k^{-1}h \in G_{x}$, which gives $hG_{x} = kG_{x}$; hence this map is injective. This gives us that our map is bijective.

One immediate corollary is that $|\mathcal{O}_{x}| = |G|/|G_{x}|$; that is, the number of elements in the orbit of $x$ is the same as the number of elements in $G$ divided by the number of elements in $G$ which fix $x$. Think about this for a minute.
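This corollary is easy to check by brute force. Below is a sketch of my own (not from the post): we generate the full dihedral group of the square from two permutations and verify $|\mathcal{O}_{x}| = |G|/|G_{x}|$ for a vertex $x$.

```python
from itertools import product

def compose(g, h):
    """Permutation composition: (g o h)[i] = g[h[i]]."""
    return tuple(g[i] for i in h)

def generate(gens):
    """Close a set of permutations under composition (a finite group)."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(g, h) for g, h in product(group, frontier)} - group
        group |= new
        frontier = new
    return group

r = (1, 2, 3, 0)       # 90-degree rotation of the square's vertices
s = (0, 3, 2, 1)       # reflection fixing vertices 0 and 2
D4 = generate([r, s])  # the dihedral group of the square; |D4| = 8

x = 0
orbit = {g[x] for g in D4}                    # where x can be sent
stabilizer = {g for g in D4 if g[x] == x}     # what holds x in place
assert len(orbit) == len(D4) // len(stabilizer)   # 4 == 8 // 2
```

The vertex $0$ can be moved to any of the four corners, and exactly two symmetries (the identity and one reflection) hold it fixed, matching $4 = 8/2$.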

## Road to the Proof.

Okay. Now, let’s think about something for a second. What is the sum

$$\sum_{g \in G} |A^{g}|$$

telling us? This is the number of elements in $A$ which are fixed by some $g \in G$; but there might be some overlap, since if $g \cdot x = x$ and $h \cdot x = x$ for distinct $g, h$, then $x$ will be counted twice: once as an element of $A^{g}$ and once as an element of $A^{h}$. But how much overlap is there? This is an innocent-seeming question, and you might think something like, "Well, it depends on how much stuff stabilizes each $x$," and this is pretty close to the point.

First, note that

$$\sum_{g \in G} |A^{g}| = \left|\{(g, x) \in G \times A : g \cdot x = x\}\right|,$$

which is just the long way to write out this sum; but the nice part about that is, we can now think about this as counting all of the elements of $A$ which are stabilized by some $g \in G$ (why?). Then,

$$\sum_{g \in G} |A^{g}| = \sum_{x \in A} |G_{x}|.$$

If you don’t see this, you should prove to yourself why they’re the same sum (why is each element counted in the left-hand side also counted in the right-hand side?). Now, by the Orbit-Stabilizer theorem above, this right-hand sum becomes pretty nice. Specifically,

$$\sum_{x \in A} |G_{x}| = \sum_{x \in A} \frac{|G|}{|\mathcal{O}_{x}|} = |G| \sum_{x \in A} \frac{1}{|\mathcal{O}_{x}|},$$

where we noted in the last equality that $|G|$ is a constant, so we may pull it out of the sum.
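The two ways of counting the fixed pairs $(g, x)$ can be checked numerically. Here is a tiny sketch of my own (not from the post), using the cyclic rotation group of a triangle acting on its three vertices.

```python
# Sanity check (my own small example): for the rotations of a triangle acting
# on its 3 vertices, sum over g of |A^g| equals sum over x of |G_x| -- both
# count the pairs (g, x) with g . x = x.
n = 3
G = [tuple((i + k) % n for i in range(n)) for k in range(n)]  # C_3 as permutations
A = range(n)

sum_fixed_points = sum(sum(1 for x in A if g[x] == x) for g in G)  # sum_g |A^g|
sum_stabilizers = sum(sum(1 for g in G if g[x] == x) for x in A)   # sum_x |G_x|

# Only the identity fixes any vertex here, so both sums count 3 pairs.
assert sum_fixed_points == sum_stabilizers == 3
```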

Recalling that $|A/G|$ denotes the number of orbits, we have that if we take a single orbit (call it $\mathcal{O}$) we will be adding up $\frac{1}{|\mathcal{O}|}$ exactly $|\mathcal{O}|$ times (since the sum is taken over each $x \in A$ so, in particular, over each $x \in \mathcal{O}$); hence, we will add $1$ for each orbit we have in $A$. That is,

$$\sum_{x \in A} \frac{1}{|\mathcal{O}_{x}|} = |A/G|.$$

Putting this all together, we have

$$\sum_{g \in G} |A^{g}| = |G| \cdot |A/G|.$$

We clean it up a bit, and state the following:

**Theorem (Burnside’s).** For a finite group $G$ acting on a set $A$, with notation as above, we have

$$|A/G| = \frac{1}{|G|} \sum_{g \in G} |A^{g}|.$$

That is, the number of orbits is equal to the sum, over $g \in G$, of the number of elements of $A$ fixed under $g$, averaged by the number of elements in $G$. Kind of neat.
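The theorem can be verified by brute force on a small case. The example below is mine, not from the post: the cyclic group $C_{3}$ rotating two-colorings of a three-bead necklace, where counting orbits directly agrees with averaging fixed colorings over the group.

```python
from itertools import product

# Brute-force check of Burnside's formula (my example, not from the post):
# C_3 acts by rotation on the 2^3 = 8 two-colorings of a 3-bead necklace.
n = 3
colorings = list(product([0, 1], repeat=n))                           # the set A
rotations = [tuple((i + k) % n for i in range(n)) for k in range(n)]  # C_3

def act(g, x):
    """Rotate the coloring x by the permutation g."""
    return tuple(x[g[i]] for i in range(n))

# Count orbits directly: each orbit is the set of rotations of a coloring.
orbits = {frozenset(act(g, x) for g in rotations) for x in colorings}

# Count via Burnside: total fixed pairs divided by the group order.
# The identity fixes all 8 colorings; each nontrivial rotation fixes
# only the 2 constant colorings, so the total is 8 + 2 + 2 = 12.
fixed = sum(1 for g in rotations for x in colorings if act(g, x) == x)
assert len(orbits) == fixed // len(rotations)   # 4 == 12 // 3
```

Both counts give four necklaces: all-0, all-1, one bead of each color in the minority.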

Next time, we’ll talk about applications!

## Applying Lagrange!: Groups of Prime Orders.

### June 29, 2010

Little post. Because I love doing things that comments tell me to do, we’re going to use Lagrange to prove a neato theorem. Now, normally, if I told you, “Hey, guy, I’ve got a group with 8 elements. Which one is it?” you’d probably be unable to tell me! Why? Lots of different groups have the same order! For example, if we’re talking about order 8, are we talkin’ $\mathbb{Z}/8\mathbb{Z}$? Are we talkin’ $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$? Are we talkin’ $D_{4}$? I just don’t know!

How could I have been so naive? How could I have been so myopic? How is it that I thought I could just wrap up group theory without mentioning Lagrange’s theorem? How could I let this topic die out not with a bang but with a whimper?

Let us, for old time’s sake, state one more theorem for the group theory primer — and this one’s a biggie! Remember how division is defined for rational numbers? $a \div b$ sort of means “split $a$ into little piles of size $b$, and $a \div b$ is how many piles there are.” For example, if we have 12 batteries and put them into piles of 3 batteries each, how many piles do we have? This doesn’t take a rocket scientist.

Last time we talked about a whole lot of stuff. We did homomorphisms, isomorphisms, and talked about the first isomorphism theorem. What did this one state? It states that if $G$ and $H$ are groups and $\phi : G \to H$ is a homomorphism, then we have that $G/\ker \phi \cong \operatorname{im} \phi$, or, in other words, the quotient of $G$ by the kernel of the map is isomorphic to the image of the map. This makes sense if you think about it: we’re kind of condensing everything that goes to the identity when we map it away from $G$, and we say that these elements ultimately don’t matter in the image — but, because of the nice properties of homomorphisms, a lot of other elements map onto each other, too.
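As a tiny sanity check (my own example, not one from the post): take the homomorphism $x \mapsto x \bmod 3$ from $\mathbb{Z}/6\mathbb{Z}$ to $\mathbb{Z}/3\mathbb{Z}$, and compare the size of the quotient by the kernel with the size of the image.

```python
# First isomorphism theorem on a finite example (mine, not from the post):
# phi: Z/6Z -> Z/3Z, phi(x) = x mod 3.
G = range(6)                 # Z/6Z under addition mod 6

def phi(x):
    return x % 3

# phi really is a homomorphism: phi(a + b) = phi(a) + phi(b) in Z/3Z.
assert all(phi((a + b) % 6) == (phi(a) + phi(b)) % 3 for a in G for b in G)

kernel = [x for x in G if phi(x) == 0]   # everything sent to the identity
image = {phi(x) for x in G}

# |G / ker(phi)| = |im(phi)|: the quotient has as many cosets as the image
# has elements, matching G/ker(phi) ~= im(phi).
assert len(G) // len(kernel) == len(image)   # 6 // 2 == 3
```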

Today, we’re going to discuss the final two isomorphism theorems (which don’t come up as often, but they’re nice) and conclude with one of the most used theorems in elementary abstract algebra: Cauchy’s Theorem.

## Group Theory Primer, part 4: everything you wanted to know about homomorphisms but were afraid to ask.

### June 23, 2010

Last time we went over some normal subgroups, how to take the direct product of two groups, and how to quotient out by (normal) subgroups. As we said before, though, groups (like vector spaces) are pretty boring by themselves. Yes, studying groups by themselves can give us relations between elements and so on (like which elements of a particular group become the identity when you square them), but, like vector spaces, we can learn a lot about a group by what it can and can’t map into nicely.

Now, let’s think about this for a second. What if I said something like the following: let’s take a group $G$ such that the elements are $e$ and $a$. Let’s say that

$$e \cdot e = e, \qquad e \cdot a = a, \qquad a \cdot e = a, \qquad a \cdot a = e,$$

and those are all the possible interactions. You could give any reasonable interpretation to this group, but it reduces to the fact that it is just a group: it’s just a set of elements and an operation.
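A two-element multiplication table like this can be checked against the group axioms by brute force. The sketch below is my own illustration (the labels `e` and `a` are my choices; any group of order 2 has this same table).

```python
from itertools import product

# Brute-force check of the group axioms for a two-element group.
# The labels e (identity) and a are illustrative; the table is forced.
elements = ["e", "a"]
table = {("e", "e"): "e", ("e", "a"): "a",
         ("a", "e"): "a", ("a", "a"): "e"}

def op(x, y):
    return table[(x, y)]

# Associativity: (x * y) * z == x * (y * z) for every triple.
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x, y, z in product(elements, repeat=3))
# Identity: e is neutral on both sides.
assert all(op("e", x) == x == op(x, "e") for x in elements)
# Inverses: every element can be multiplied back to e.
assert all(any(op(x, y) == "e" for y in elements) for x in elements)
```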

If we think about groups as if they were numbers, we’d want to add, subtract, multiply, and divide stuff. Unfortunately, groups aren’t as simple as numbers, and we have more complex notions of what all of these things should correspond to.

## Group Theory Primer, part 2: examples of a few groups.

### June 19, 2010

Last time, we talked about what a group is. This time, we’ll go over some specific groups. In the next post, we’re going to go over some basic theorems about groups.

## Group Theory Primer, part 1: what is a group?

### June 12, 2010

**Personal Motivation:** This morning I awoke from a dream and all I could think about was manipulating group elements as if they were linear maps. We’ve been talking about linear maps a lot, and one of their nice properties is that they can be represented by a matrix; if we were to represent group elements as matrices, then we would be able to use a lot of the linear algebra we know to prove a few things about groups! In fact, this type of thinking has a name: representation theory. I won’t lie to you, readers: I’ve taken a class in this, but I hated it and paid very little attention in it. Despite this, I’m going to begin going over the text and select some nice theorems to write about.

**What I’m actually going to write about in this post:** Because groups are so damn important in abstract algebra, I’m going to take this post to construct them. Because this would be quite boring to the general mathematician who has already taken abstract algebra, I’m going to do it in a slightly weird way: I’m going to build them up by adding structure to sets.