## Group Theory Primer, part 1: what is a group?

### June 12, 2010

Personal Motivation: This morning I awoke from a dream and all I could think about was manipulating group elements as if they were linear maps.  We’ve been talking about linear maps a lot, and one of their nice properties is that they can be represented by a matrix; if we were to represent group elements as matrices, then we would be able to use a lot of the linear algebra we know to prove a few things about groups!  In fact, this type of thinking has a name: representation theory.  I won’t lie to you, readers: I’ve taken a class in this, but I hated it and paid very little attention in it.  Despite this, I’m going to begin going over the text and select some nice theorems to write about.

What I’m actually going to write about in this post: Because groups are so damn important in abstract algebra, I’m going to take this post to construct them.  Because this would be quite boring to the general mathematician who has already taken abstract algebra, I’m going to do it in a slightly weird way: I’m going to build them up by adding structure to sets.

Let’s begin.  What is a set?  We can imagine it as kind of a barrel of things.  It can literally be a barrel of anything.  For example,

This is a set of things!  A fennec fox, an orange, the number 2,543,100, and the word "possum."  Generally, though, drawing pictures to make sets gets tiring, so we usually write our sets between braces in the following way: $\{\mbox{possum}, \mbox{orange}, \mbox{fennec fox}, 2543100\}$.

There are limitless numbers of sets: you can create one just by taking a bunch of objects and putting them between braces.  The only restriction on sets is that a set cannot contain more than one copy of the same object.  For example, the set $\{1,1,1,1,2,2,2,2,2,3,3,3,3,3\}$ is just the same as $\{1,2,3\}$, since we disregard repetition.

In algebra when we work with sets they are usually sets of numbers or algebraic variables.  We almost never work with possums or oranges, unfortunately.

So what can we do with sets?  Well.  Not that much, actually.  We can count them, we can map them back and forth arbitrarily, and we can look at them a lot.  Because there is almost no structure on sets (specifically, every element is essentially the same as every other element in a set, since there’s no distinction that can be made between them) they are hard to work with.  So, let’s give them a little bit of structure.

Why don’t we do something fun?  Let’s define some kind of operation (which we will call "dot" for short).  Given some set $S$, we say that for every two elements $x,y\in S$ we have $x\cdot y = z$ for some $z\in S$.  Note that this operation is NOT automatically associative.  It doesn’t matter what $z$ is so long as it’s actually in the set $S$; in other words, we want $S$ to be closed under our binary (taking two inputs, like addition, subtraction, etc.) operation.  That is, if we dot things in our set together, we want the product to be another element of the set.  This sort of thing is called a magma, and we’ll call our magma $M = (S, \cdot)$, or just $M$ for short when the operation is implied.
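Since closure is the only thing a magma demands, it's easy to check by brute force when the set is finite.  Here's a quick sketch in Python (the helper name `is_magma` is my own invention, not standard anywhere):

```python
from itertools import product

def is_magma(S, op):
    """Check closure: op(x, y) must land back in S for every pair x, y in S."""
    return all(op(x, y) in S for x, y in product(S, repeat=2))

# {0, 1} is closed under multiplication...
print(is_magma({0, 1}, lambda x, y: x * y))   # True
# ...but not under addition, since 1 + 1 = 2 falls outside the set.
print(is_magma({0, 1}, lambda x, y: x + y))   # False
```

Of course, this brute-force check only works for finite sets; for something like the natural numbers under plus, we need an actual argument, as above.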

All things considered, that’s not a huge deal.  We can make really trivial magmas at this point, but let’s give an example of a nice magma.  Let’s take our set to be the set of natural numbers ${\mathbb N}$ and our operation to be the standard plus operator.  We can show that this is a magma: ${\mathbb N} = \{1,2,3,4,5,\dots\}$, and if we take $x + y$ for $x,y\in {\mathbb N}$, then the sum is going to be an integer greater than both $x$ and $y$.  Luckily, every positive integer is accounted for here, so the sum must be in the set*.

*Note that a more careful proof would perhaps require me to write every number as a sum of 1’s: in this case, $x + y = (1 +\dots + 1)+(1 +\dots+ 1)$, which is, itself, a sum of 1’s.  (For that regrouping to make sense without worrying about parentheses, we’d also want to know that plus is associative.)

Either way, we have one magma here.  Can we make another?  Sure: take the natural numbers with multiplication instead of addition.  That’s a good example.  We could even do silly things like taking the non-zero real numbers under division, or under "subtracting 1 from the product," or whatever strange operations you can think of.

Ultimately, magmas are not that great to work with, simply because there is not quite enough structure.  It is relatively easy to work with small magmas, but when we try to generalize to larger magmas, it’s difficult to even see how many magmas of a certain size there can be: the problem is that associativity of the operation isn’t guaranteed.  Upsetting.

So, let’s try to guarantee just that.  Let’s take our magma $M = (S,\cdot)$, but in this case, we’ll require dot to be associative.  In this case, $M$ becomes a semi-group.  A semi-group, if you’re keeping score, is just a set which is closed under an associative binary operation.  There are many examples of semi-groups, including the natural numbers with plus that we did above.  Feel free to check that this is, in fact, a semi-group.  Magmas which are not semi-groups do exist (the integers under subtraction, for one: $(1-1)-1 = -1$ but $1-(1-1) = 1$), but most things that we look at in mathematics behave nicely like semi-groups.  These are, therefore, a nicer structure to look at!  They’re, at least, prettier.
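Just like closure, associativity can be checked by brute force on a finite set, this time over every triple of elements.  Another quick sketch (again, `is_associative` is just a name I made up):

```python
from itertools import product

def is_associative(S, op):
    """Check (x.y).z == x.(y.z) for every triple x, y, z in a finite set S."""
    return all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(S, repeat=3))

S = set(range(5))
# Addition mod 5 is associative...
print(is_associative(S, lambda x, y: (x + y) % 5))   # True
# ...but subtraction mod 5 is not: (0-0)-1 = 4 while 0-(0-1) = 1 (mod 5).
print(is_associative(S, lambda x, y: (x - y) % 5))   # False
```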

Now, when we add integers together, what happens when we add zero to something?  We get "back" to that number.  For example, $4 + 0 = 4$.  This is cool: it’s almost like we didn’t even do anything to the original 4 when we added something to it.  Same thing happens when we think about multiplying or dividing by 1: the "other number" in the product stays the same.  This kind of number is called an identity, and they’re very nice elements!

Why wouldn’t we want identities in our semi-groups?  I’m not sure; I always like identities in the stuff that I work on.  It just makes things nicer: you can operate on elements without changing them, you can try to take some element and "get back to the identity" by operating on it a number of times, and so on.  Well, semi-groups don’t guarantee identities, so if we require that there be an identity in our semi-group, we need to call the result something new.  Of course, there is already a name for such a structure: a monoid.

If you’re having trouble keeping track of all this, we have that a monoid is a set $S$ closed under an associative operation called "dot" that has an identity with respect to this operation.  So not that big of a deal.  Monoids are actually used quite a bit in mathematics, and there are a number of good examples of monoids:

• We can take any semi-group and we can throw into the set an identity element $e$ such that $x\cdot e = e\cdot x = x$.  This makes the semi-group a monoid.
• The natural numbers with zero under addition is a monoid.
• The natural numbers under multiplication is a monoid.
• If we take some set of letters $\{a,b,c,\dots\}$ and let $S$ be the set of all words we can build from them, with concatenation as our binary operation (that is, letting $b\cdot o \cdot y = boy$ as a word that cannot be reduced), then this is a monoid if we also include an "empty word" in our set such that concatenating a word with this empty word leaves just the original word.
• For anyone who already knows what a group is: we can consider any group, ring, field, etc., a monoid.
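Continuing the brute-force theme: on a finite set we can simply hunt for an identity element by testing each candidate against everything else.  A sketch (the `find_identity` name is mine):

```python
def find_identity(S, op):
    """Return an element e with e.x == x.e == x for all x in S, or None."""
    for e in S:
        if all(op(e, x) == x and op(x, e) == x for x in S):
            return e
    return None

S = set(range(5))
# Under addition mod 5, the identity is 0; under multiplication mod 5, it's 1.
print(find_identity(S, lambda x, y: (x + y) % 5))   # 0
print(find_identity(S, lambda x, y: (x * y) % 5))   # 1
```

A nice little fact you can convince yourself of here: if an identity exists, it's unique, so it doesn't matter in what order we test the candidates.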

And, now, this is pretty nice.  I mean, monoids are great, aren’t they?  They’re fantastic.  But what happens if something like this were to go down: suppose you took your element $a$ and dotted it with some element $b$ to obtain some element $a\cdot b = c$.  "Oh no!" you think to yourself.  "I didn’t mean to do that!  I wish I could just get back to my element $a$!  If only there were some way to 'undo' multiplication by $b$!  Alas!"

Unfortunately, in a monoid, inverses are not guaranteed.  But in many of our normal number systems, we have inverses: in the integers under addition, we have that $2 + (-2) = 0$; in the real numbers without zero under multiplication, we have that $a \cdot \frac{1}{a} = 1$; and in many other sets we have ways of "inverting" elements to get back to the identity.  It should seem reasonable, then, that an even nicer structure than a monoid would have inverses included in it!

We finally define a group to be a monoid with an inverse for each of its elements.  That is, for every $a\in M$, we have an element $a^{-1}$ such that $a\cdot a^{-1} = a^{-1}\cdot a = e$, where $e$ is our identity element.  (Note that the identity is its own inverse, since $e\cdot e = e$.)  For some reason, we usually use the letters $G$ and $H$ to stand for groups.

Alright, so, let’s sum up here.  A group $G$ is a structure such that:

• $G$ is closed under some associative binary operation $\cdot$.
• $G$ has an identity element with respect to the binary operation $\cdot$.
• $G$ has an inverse for every one of its elements.

Equivalently, if we wanted to be brief, we could say that if $G$ is a group, then:

• $\exists\, \cdot : G\times G\rightarrow G$ such that $\forall a,b,c\in G$ we have $(a\cdot b)\cdot c = a\cdot (b\cdot c)$.
• $\exists e\in G$ such that $\forall x\in G$ we have $x\cdot e = e\cdot x = x$.
• $\forall x\in G$, $\exists x^{-1}\in G$ such that $x\cdot x^{-1} = x^{-1}\cdot x = e$.
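Bundling all of the brute-force checks from this post together, we can test whether a finite set with an operation is a group.  A sketch (once more, `is_group` is my own naming), using the integers mod $n$ as the example:

```python
from itertools import product

def is_group(S, op):
    """Check closure, associativity, identity, and inverses on a finite set S."""
    # Closure: every product must land back in S.
    if not all(op(x, y) in S for x, y in product(S, repeat=2)):
        return False
    # Associativity: check every triple.
    if not all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(S, repeat=3)):
        return False
    # Identity: hunt for an e with e.x == x.e == x for all x.
    e = next((c for c in S if all(op(c, x) == x == op(x, c) for x in S)), None)
    if e is None:
        return False
    # Inverses: every x needs some y with x.y == y.x == e.
    return all(any(op(x, y) == e == op(y, x) for y in S) for x in S)

n = 6
Zn = set(range(n))
# The integers mod 6 under addition form a group...
print(is_group(Zn, lambda x, y: (x + y) % n))   # True
# ...but under multiplication they don't: 0 (among others) has no inverse.
print(is_group(Zn, lambda x, y: (x * y) % n))   # False
```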

Groups are ridic nice, and there are tons of studies devoted strictly to groups.  In one of the next posts, I’ll go over some common examples of groups (or, at least, I’ll link to some) and I’ll go over some basic theorems relating to the study of groups.

Eventually, we will take the elements in some group and make them into nicely interacting matrices.  We’ll be able to apply some nice linear algebra theorems from here.  Sweet.