Homology Primer 2: Cycles and Boundaries.

December 28, 2010

Two questions usually immediately spring to mind when you are introduced to some mathematical topic: "why should I care?" and, assuming you do care, "how do I use it?"  With homology, we can either have a serious answer or a silly answer; I prefer the latter, so whenever people ask me what I do, I tell them I spent five years studying math so that I can officially say that a donut has one more hole than a sphere.

If you already know about the fundamental group, then you might be saying to yourself, "Alright, I already know how to tell things with holes apart.  We have the fundamental group for that.  And if the fundamental group doesn't work, then we have higher homotopy groups.  Why do we need homology?"  It turns out that homotopy groups are actually quite difficult to calculate, even for the simplest of spaces: n-spheres.  In fact, even though S^{n} is only n-dimensional, we can have nontrivial \pi_{r}(S^{n}) for r > n.  That's kind of crazy!  We'd like a topological invariant that's a little bit easier to handle.

It turns out that it's not that difficult to get from homotopy groups to homology groups.  Consider the following CW-complex (two 0-cells, two 1-cells) with an orientation induced by the gluing; let's call it X just for the sake of naming it.  We make this by gluing the 1-cell a from v_{1} to v_{2}, and the little red arrow on the side shows this orientation.  Similarly, we glue b from v_{2} to v_{1}.

What's the fundamental group of this CW-complex?  Well, first note that we need a basepoint.  So, let's make our basepoint v_{1}.  Alright, so what is \pi_{1}(X,v_{1})?

Well, consider the loop that goes up around a and then down around b.  Let's call this loop ab.  What if we ran this loop in reverse?  It would go up b, so in the opposite direction of b's orientation, and then down a, so opposite a's orientation; this loop would look like b^{-1}a^{-1}.  Now, these loops start at v_{1}, but they're essentially the same loop as if we started at v_{2} and went down b and then up a.  All of these loops are really just the same loop, regardless of the basepoint.  We really shouldn't care about where the loops start, just that we can't "squish them down" to make them trivial.

This last statement deserves a note: when can we find a homotopy that squishes a loop down into a point?  The idea is that there can’t be a hole in the middle of it.  The idea, then, is that we need a nice hole-less place for the nullhomotopy to happen; thus, inside our loop, we need there to be something homeomorphic to D^{2}.  Think about this: if our loop is the boundary of a disk, then we can shrink it down. 

Look at the first circle.  Because it is the boundary of the disk, we can homotope the disk to a point and the boundary will follow.  But what about the second one?  Nope.  There's a hole.  The best we can do is homotope it down to S^{1}, and that's not trivial.  Damn.

But we’ve struck something really important here.  In terms of cells, what is a disk?  It’s just a 2-cell!  So, what are we saying here?  If a 1-cell in the form of a loop is the boundary of a 2-cell in the CW-complex, then the 1-cell is nullhomotopic.  If not, then it surrounds some 2-dimensional "hole".

Why does it surround a 2-dimensional hole?  Notice the lower picture in the picture above; our hole is sort of like we cut a 2-cell out, so that’s why I call it a 2-dimensional hole.  Nonetheless, we can find a hole by finding all the loops which are not boundaries of some 2-cell.

But why bother stopping there?  If we have some 2-cell which is a "loop" which is the boundary of a 3-cell (a solid ball), then we can contract the 2-cell to a point via the map that takes the solid ball to a point.  Thus, the only time we can't homotope our 2-cell to a point is when it is not the boundary of a 3-cell; in other words, when there is a "3-dimensional hole" (a hole in the shape of a ball) enclosed by the 2-cell.  Why is this harder to see?  What does it mean for a 2-cell to be a loop?

To generalize this idea, we need some machinery.  So, let’s begin by talking about cycles and boundaries.


Cycles and Boundaries.

To facilitate learning, let’s draw a picture first just so we have something to refer to. 


By the way, I call this picture "the mystery", because it is a mystery to me why I use different letters for vertices but just e with a subscript for edges.  Either way, it should be clear what this CW-complex is: it has three 0-cells, affectionately named "a", "b", and "c", and it has three 1-cells which are oriented as the arrows show.

Using our new terms, what kinds of things are loops in this complex? First, instead of the multiplicative notation we used in the introduction, let’s use additive notation to mean "go around in the proper orientation."  Thus, e_{1} + e_{2} means "go around e_{1} and then go around e_{2}."  There’s no real ground-breaking reason for changing from multiplication to addition — it’s a bit easier to read and it anticipates some commutativity that’s coming up.

Notice that e_{1} + e_{2} actually makes a loop!  Thus, we have that

e_{1} + e_{2}

is a loop.  Similarly, going around "twice" gives us:

e_{1} + e_{2} + e_{1} + e_{2}

but at this point we encounter a problem: can we group things together and add like terms as in high school algebra?  Is this equation the same as

2e_{1} + 2e_{2}

and does this last equation even make any sense?  It says to "go around e_{1} two times"…but how can we do that?  We start at a and we go to b, and then we'd need to be back at a again to go around e_{1} the second time!

It turns out that if we don't think about these sums as instructions for walking along edges, we can actually simplify our calculations by just combining like terms the way we do in usual high school algebra.  But in order to do this, we need to think about what kinds of sums would make a loop and what kinds wouldn't; and for that, we need to introduce the concept of a chain group.


Chain Groups.

In particular, the things we care about (boundaries and cycles) will have a nice representation if we allow such "adding of like terms" in the sums above.  In fact, this kind of adding makes the formal sums of 1-cells into an abelian group under addition with the edges as generators!  This is called a free abelian group, but all this means is that every element of the group can be written as a sum where each term is an edge e_{i} with a coefficient in {\mathbb Z}.  Let's call this group the chain group of 1-cells.  In general, this sort of thing is called the i-th chain group.  Let's just be explicit about this for a second.


Definition: The i-th chain group of a CW-complex X, denoted C_{i}(X), is the free abelian group with each i-cell as a generator.


So, for example, in the picture above (let’s call it X) we have that

C_{1}(X) \cong {\mathbb Z}\oplus {\mathbb Z}\oplus {\mathbb Z}

with generators \{e_{1}, e_{2}, e_{3}\}.  Similarly,

C_{0}(X) \cong {\mathbb Z}\oplus {\mathbb Z}\oplus {\mathbb Z}

with generators \{a, b, c\}.  We usually use the following notation to denote long strings with \oplus‘s in them:

\displaystyle C_{1}(X) \cong \bigoplus_{3}{\mathbb Z}

\displaystyle C_{0}(X) \cong \bigoplus_{3}{\mathbb Z}

and will usually state the generators explicitly.  Also, note that for the picture above,

\displaystyle C_{i}(X) \cong 0

if i \geq 2 since there are no i-cells when i \geq 2.  Sad.
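As an aside, this "combine like terms" structure is easy to play with on a computer.  Here's a minimal Python sketch (the string labels like "e1" and the helper names chain and add are made up purely for illustration) that represents an element of a chain group as a map from cells to integer coefficients:

```python
from collections import Counter

def chain(*terms):
    """Build a chain from (coefficient, cell) pairs, combining like terms.

    A chain is stored as a dict mapping each generator (a cell label)
    to its integer coefficient; cells with coefficient 0 are dropped.
    """
    total = Counter()
    for coeff, cell in terms:
        total[cell] += coeff
    return {cell: k for cell, k in total.items() if k != 0}

def add(c1, c2):
    """Add two chains: the group operation of the free abelian group C_i(X)."""
    total = Counter(c1)
    for cell, k in c2.items():
        total[cell] += k
    return {cell: k for cell, k in total.items() if k != 0}

# Going around the circle twice: e1 + e2 + e1 + e2 simplifies to 2e1 + 2e2.
twice_around = chain((1, "e1"), (1, "e2"), (1, "e1"), (1, "e2"))

# Each generator has an inverse, so e1 + (-e1) is the empty (zero) chain.
zero = add({"e1": 1}, {"e1": -1})
```

The empty dict plays the role of the identity element, and negating every coefficient gives inverses, which is exactly what "free abelian group on the cells" means.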


Back to Boundaries and Cycles.

Now that we have our chain groups, what were we talking about?  That’s right, making loops (1-cycles) for this space.


So, notice that when we went around the circle part in the complex above twice before, we had

2e_{1} + 2e_{2}

once we simplified.  Similarly, we have that

3e_{1} + 3e_{2}

is also a loop (why?).  What about

e_{1} + 2e_{2}?

Well, no.  No matter how you arrange these edges, you can’t make a loop out of them.  Hm.  Well, let’s look at another complex and see if we can’t make a rule out of this.

Let's name some loops here.  (Recall that loops can go around a number of times and even intersect themselves; all they need to do is start and end at the same spot.)  Why don't you write down some, and then we'll see if we get some of the same ones.

e_{1} + e_{2}

e_{1} + e_{2} + e_{3} + e_{2} = e_{1} + 2e_{2} + e_{3}

e_{1} - e_{3} + e_{3} - e_{3} = e_{1} - e_{3}

-e_{3} - e_{2}


So there are a number of crazy loops here, but do we see any sort of pattern?  Let’s consider the picture: if we need to start and end at a, then how many times can we "leave" and "go into" a?  If we "leave" a to travel along, say, e_{2}, then we’ve got to come back via some other edge.  Thus, we’ve got to "leave" and "enter" a the same number of times.  Ditto for the other vertices.  What would happen if we "left" a vertex more times than we "entered" it? 

Good, so, how do we talk about leaving and entering vertices in a legit way?  First, notice that our edges are directed.  So, we can say that, say, e_{1} goes from b to a.  Let's define a function saying just that.  We'll call it the boundary map, and it's usually denoted \delta.  We define

\delta (e_{1}) = a - b

since it goes from b towards a.  (It really doesn’t matter which order we put these vertices in so long as we’re consistent with all the edges.  We could have just as easily defined the boundary map to be b - a in this case.  If we were doing this rigorously, there is actually a method to choosing the signs of each cell.)  Thus, we similarly have

\delta (e_{2}) = b - a

\delta (e_{3}) = a - b

Alright?  This is important: we’re formalizing the idea that these edges are between these vertices and that they’re directed.  We’ll do more examples with this in the next post when we do concrete calculations.  I’m also going to state without proof that \delta is a linear function (convince yourself of this by calculating something like \delta (e_{1} + 4e_{2})).  Let’s note something really cool now.  Take the \delta of the loops we wrote down before:

\delta( e_{1} + e_{2}) = \delta (e_{1}) + \delta (e_{2}) = (a - b) + (b - a) = 0

\delta( e_{1} + 2e_{2} + e_{3}) = \delta (e_{1}) + 2\delta (e_{2}) + \delta (e_{3})

= (a - b) + 2(b - a) + (a - b) = 0

\delta (-e_{3} - e_{2}) = -\delta (e_{3}) - \delta (e_{2}) = -(a-b) - (b-a) = 0

Okay, a pattern seems to be emerging here.  We keep getting that our boundary map is equal to zero.  What about if we took something that wasn’t a loop?  Would that be zero too? 

\delta (e_{1} + e_{2} + e_{3}) = \delta (e_{1}) + \delta (e_{2}) + \delta (e_{3}) = (a - b) + (b - a) + (a - b) = a - b \neq 0

This should convince you, in a relatively non-rigorous way, that an element of our chain group C_{1}(X) is a loop if and only if its boundary is equal to zero.  The idea of why this has to be true is that argument about "entering" and "leaving" above.
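If you'd like to check these boundary computations mechanically, here's a small Python sketch (the table BOUNDARY and the function names are hypothetical, invented just for this illustration) that encodes \delta on the three edges above and extends it linearly:

```python
# Boundary of each 1-cell, read off from the arrows in the picture:
# e1 goes from b to a, e2 from a to b, e3 from b to a.
BOUNDARY = {
    "e1": {"a": 1, "b": -1},   # delta(e1) = a - b
    "e2": {"a": -1, "b": 1},   # delta(e2) = b - a
    "e3": {"a": 1, "b": -1},   # delta(e3) = a - b
}

def delta(chain):
    """Extend delta linearly: delta(sum of k_i * e_i) = sum of k_i * delta(e_i)."""
    out = {}
    for edge, k in chain.items():
        for vertex, sign in BOUNDARY[edge].items():
            out[vertex] = out.get(vertex, 0) + k * sign
    return {v: k for v, k in out.items() if k != 0}

def is_cycle(chain):
    """A chain is a 1-cycle exactly when its boundary vanishes."""
    return delta(chain) == {}
```

Running is_cycle on the loops written down above returns True for each of them, while the non-loop e_{1} + e_{2} + e_{3} has boundary a - b, matching the hand computation.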

In fact, this works for any chain group C_{i}(X).


Definition: We call an element \alpha\in C_{i}(X) an i-cycle (or just a cycle when the dimension is clear) if \delta (\alpha ) = 0.  In other words, \alpha is an i-cycle if \alpha\in Ker (\delta ), the kernel of the boundary map.


Good?  So, in particular, we have that our loops here are "really" called 1-cycles.  We use this new term because what would a "loop" of 2-cells look like?  Yeah.  Pretty crazy.

Also, especially take note of that last part, because it’s the part that we’re actually going to use for concrete computations.  The kernel of the boundary map turns out to be a really useful space for computing homology groups.


What about Boundaries?

Remember when we were talking about holes in spaces if we had some sort of 1-cell in the form of a loop which wasn’t the boundary of a 2-cell? Above, we noted that if a 1-cell in the form of a loop surrounded a 2-cell, then we could just homotope both of them to a point; this would mean no hole!  But if the loop did not surround a 2-cell, then we couldn’t homotope it down to a point.  Sad.  The preceding argument gives us a way to say this more precisely:

If an i-cycle is not the boundary of some (i+1)-cell, then there is an "i-dimensional hole" in the complex.


If an i-cycle \alpha\in C_{i}(X) is not the boundary of some (i+1)-chain (a sum of (i+1)-cells), then there is an "i-dimensional hole" in the complex.

This second statement is a bit more precise, but how can we tell that something is the boundary of something else?  Well, why don’t we just take the boundary map of every single (i+1)-cell and see if our cycle is one of them.  Specifically, we consider \delta (C_{i+1}(X)) which we write as Im (\delta_{i+1} ), and then see if any of our cycles match up.

(Note: the subscript on the boundary map tells us which cells we’re taking the boundary of.  Before, when we were considering just 1-cells, our boundary map was "really" \delta_{1}.  Also, the "Im" stands for "Image.")

Thus, we can say that we consider every element in the kernel of \delta_{i} and see if any are in the image of \delta_{i+1}.  In group theoretic terms, we’re looking at the quotient group

\displaystyle \frac{Ker (\delta_{i})}{Im (\delta_{i+1})}

and, in fact, we give this quotient a name: it's called the i-th (cellular) homology group of X, written H_{i}(X).  This turns out to be the quantity that we really care about, since it will tell us "how many holes" the complex has.
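Since everything here is linear, the "how many holes" count can be read off with ordinary linear algebra: over the rationals, the rank of the i-th homology group is dim Ker(\delta_{i}) minus rank(\delta_{i+1}).  Here's a Python sketch for the three-edge complex above, assuming its only vertices are the a and b appearing in the boundary formulas (the matrix d1 and the bare-bones Gaussian-elimination rank routine are written just for this illustration):

```python
from fractions import Fraction

def rank(matrix):
    """Rank of an integer matrix over the rationals, via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# delta_1 for the three-edge complex: rows indexed by {a, b}, columns by
# {e1, e2, e3}, entries read off from delta(e1) = a - b, etc.
d1 = [[1, -1, 1],    # coefficient of a
      [-1, 1, -1]]   # coefficient of b

# rank H_1 = dim Ker(delta_1) - rank(delta_2); there are no 2-cells here,
# so Im(delta_2) = 0 and the second term vanishes.
betti_1 = 3 - rank(d1) - 0

# rank H_0 = dim C_0 - rank(delta_1), since delta_0 is the zero map.
betti_0 = 2 - rank(d1)
```

For this complex the computation gives two independent 1-dimensional holes (the two loops in the theta-shaped graph) and one connected component, which matches the picture.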

(Advanced Users Note: For those of you out there who are going to whine that I've never actually shown that this image is a subgroup of this kernel, you can see this by using the formula for the boundary map given in a number of books — in particular, Hatcher — to show that (\delta_{i} \circ \delta_{i+1})(C_{i+1}(X)) = 0 for each i.  This shows that the image is a subgroup of the kernel.)
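The fact that a boundary of a boundary vanishes is easy to check by hand in a small example.  The following Python sketch uses a hypothetical disk-like complex (two vertices a and b, two edges e_{1} and e_{2} forming a circle, and one 2-cell f glued along the loop e_{1} + e_{2}, so that \delta_{2}(f) = e_{1} + e_{2}); all the names here are made up for illustration:

```python
# Hypothetical disk: delta_2 sends the 2-cell f to its attaching loop,
# and delta_1 sends each edge to the difference of its endpoints.
DELTA_2 = {"f": {"e1": 1, "e2": 1}}
DELTA_1 = {"e1": {"a": 1, "b": -1},   # e1 goes from b to a
           "e2": {"a": -1, "b": 1}}   # e2 goes from a to b

def apply_delta(table, chain):
    """Apply the boundary map given by `table` to a chain, extending linearly."""
    out = {}
    for cell, k in chain.items():
        for face, sign in table[cell].items():
            out[face] = out.get(face, 0) + k * sign
    return {c: k for c, k in out.items() if k != 0}

# delta_1(delta_2(f)) = delta_1(e1 + e2) = (a - b) + (b - a) = 0
boundary_of_boundary = apply_delta(DELTA_1, apply_delta(DELTA_2, {"f": 1}))
```

The composite lands on the zero chain, which is exactly the statement \delta_{1} \circ \delta_{2} = 0 for this complex.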

This is actually not as hard as it seems.  We have generators for each chain group and we have linear maps, which means we can use linear algebra to find the image and quotients as well as bases for both of these.  Kind of neat, no lies.  We’ll stop here, but we’re painfully close to finding the homology groups of complexes — this, though, we’ll take up in the next post!


Where do we go from here?

The next post will be just a whole slew of examples, beginning with some extremely basic examples (a point, a line, the circle, the disk, the sphere, the 2-ball, etc.) and progressing to slightly more difficult examples (the torus, the Klein bottle, real projective space, etc.).  After that post, I'll begin to give some neat examples out as homework with the solutions included.  From there, we'll see what theorems we can prove to make our intuition match our calculations a bit better — like, why should we have to compute the kernel of a 12\times 12 matrix if we know that a solid cube is contractible?

