Recently, I’ve been learning to program in a new language and have been doing Project Euler problems for practice — of course, as the name suggests, most of these deal explicitly with problems which must be solved (efficiently) with mathematical techniques.  Two of the most common algorithms I’ve used are prime testing and GCD finding.  I’ll post about the former later, but the latter is an interesting problem in its own right:

 

Initial Problem.  Given two natural (positive whole) numbers called m, n, can we find some other natural number that divides both of them?

 

This problem is a first step.  It’s nice to be able to write numbers as a multiple of some other number; for example, if we have 16 and 18, we may write them as 8\times 2 and 9\times 2, thus giving us some insight as to the relationship between these two numbers. In that case it may be easy to see, but perhaps if you’re given the numbers 46629 and 47100, you may not realize right away that these numbers are 99\times 471 and 100\times 471 respectively.  This kind of factorization will reveal "hidden" relationships between numbers.  

So, given two numbers, how do we find if something divides both of them — in other words, how do we find the common divisors of two numbers?  If we think back to when we first began working with numbers (in elementary school, perhaps) the first thing to do would be to note that 1 divides every number.  But that doesn’t help us all that much, as it turns out, so we go to the next number: if both numbers are even, then they have 2 as a common factor.  Then we "factor" both numbers by writing them as 2\times\mbox{ something} and then attempt to keep dividing things out of the something.  We then move on to 3, skip 4 (since anything divisible by 4 is already divisible by 2 twice), go on to 5, then 7, then…and continue through the primes.  This gives a prime factorization, but we have to note that if, say, 2 and 5 divide some number, then so does 10.  These latter divisors are the composite factors.
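To make this concrete, here is a minimal Python sketch of the procedure just described (the function names are my own):

```python
def prime_factorization(n):
    """Factor n by trial division, as described above: divide out 2s,
    then try odd candidates; 2 has already handled the even ones."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1 if d == 2 else 2
    if n > 1:  # whatever is left over is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

def common_divisors(m, n):
    """The brute-force approach: test every candidate up to min(m, n)."""
    return [d for d in range(1, min(m, n) + 1) if m % d == 0 and n % d == 0]

print(prime_factorization(46629))   # {3: 3, 11: 1, 157: 1}
print(common_divisors(16, 18))      # [1, 2]
```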

This seems excessive, but it is sometimes the only way one can do it. 

Anecdote!: On my algebra qualifying exam, there was a question regarding a group of order 289 which required us to see if 289 was prime or not; if not, we were to factor it.  We were not allowed calculators, so what could we do?  Try everything.  Note that we only need to try up to the square root of the number (which we could estimate in other ways), but it’s still a number of cases.  If you check, none of the following numbers divide into 289: 2, 3, 5, 7, 11, 13.  At this point, I was about to give up and call it a prime, but, for whatever reason, I decided to try 17.  Of course, as the clever reader will have pointed out, 289 = 17\times 17.  It is not prime.  There was, luckily, only one student who thought it was prime, but it points out how the algorithm above is not entirely trivial if one does not have access to a computer or calculator. 

 

Once we have a common divisor, or a set of common divisors, a natural thing to want to do is to find the biggest (we already have the smallest, 1) since in this way we can write our numbers with the largest common factor multiplied by some other number.  It will, in effect, make things prettier.

 

Real Problem.  Find the greatest divisor which is common to two natural numbers, m, n.

 

If you were just learning about this kind of thing, you may spout out the following solution: find all of the common divisors, then pick the greatest.  While this is not especially efficient, it is a solution.  Unfortunately, even for small numbers, this gets out of hand quickly.  For example, 60 and  420 have the following common divisors: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60.  This takes a while to compute by hand. 

Even if we were to find prime factors, this would be 60 = 2^2 \times 3\times 5 and 420 = 2^2 \times 3 \times 5\times 7, which gives us that they share a number of prime factors.  A bit of thinking gives us that we take all of the prime factors they "share" and multiply them together to get the greatest common divisor.  This is another potential solution which is much faster than simply listing out all of the common divisors.  Unfortunately, this falls prey to the same kind of trap that other prime-related problems do: it is, at times, especially difficult to factor large composite numbers.  For example, the "reasonably small" number 49740376105597 has a prime factorization of 2741 \times 37813 \times 479909; this is not at all efficient to factor if one does not have a computer or a specialized calculator with a factoring algorithm on it.  As a mean joke, you may ask your friend to factor something like 1689259081189, which is actually the product of the 100,000th and 100,001st prime — that is, they would need to test 99,999 primes before getting to the one which divides the number.  If they divided by one prime per second (which is quite fast!) this would take them 1 day, 3 hours, and 46 minutes.  Not especially effective, but it will eventually get the job done.
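In code, the shared-prime-factors idea looks like the following sketch, reusing prime_factorization from above (each shared prime is taken to the smaller of its two exponents):

```python
def gcd_from_factorization(m, n):
    """Multiply together the shared prime factors, each taken to the
    smaller of its exponents in the two factorizations."""
    fm, fn = prime_factorization(m), prime_factorization(n)
    g = 1
    for p, e in fm.items():
        g *= p ** min(e, fn.get(p, 0))
    return g

print(gcd_from_factorization(60, 420))  # 60
```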

 

Real Problem, With Efficiency: Find the greatest divisor which is common to two natural numbers, m,n, but do so in an efficient manner (we’ve all got deadlines!).

 

We need to sit down and think about this now.  We need an entirely new idea.  We note, at least, that for the two numbers m,n, one of them must be larger than the other (or else the problem is trivial).  One thing to try would be to see if the smaller one goes into the larger one (for example, above we had 60 going into 420, which gave us the easy solution that 60 must be the greatest common divisor).  If not, maybe we can see how much is left over.  That is, if m is the larger number,

m = a_{1}n + r_{1}

where here a_{1} is the number of times n goes into m without exceeding it, and r_{1} is the "remainder"; if it’s equal to 0, then n evenly divides into m, and otherwise it is less than n (or else we could divide an additional n into m). 

Using this, if r_{1}\neq 0, we may write m - a_{1}n = r_{1}; this means, in particular, that anything which divides both n and r_{1} must also divide m (it divides both terms of a_{1}n + r_{1}).  But we don’t yet know of anything which divides both n and r_{1}; so let’s see how many times r_{1} goes into n.  Using the same process…

n = a_{2}r_{1} + r_{2}

and by rearranging, we have that n - a_{2}r_{1} = r_{2}.  So anything which divides both r_{1} and r_{2} must also divide n; but we aren’t sure yet whether r_{2} divides r_{1}…if it did, we would be able to say that r_{2} divides n and hence, by the previous equation, that r_{2} is a common divisor of m and n (why?).  That’s something at least. 

The cool thing about our algorithm here is that, because a_{1}n + r_{1} = m, we have that either r_{1} = 0 and we’re done with the algorithm, or r_{1} > 0 and we may form a new equation n = a_{2}r_{1} + r_{2}; this equation has, on the left-hand side, the number n, which is less than the previous equation’s left-hand side, m.  Continuing this process, we will have r_{1}, r_{2}, \dots on the left-hand side, each of which is less than the one which came before it.  Because r_{i} \geq 0 for any of the remainders, eventually one of them will become 0 (why?) and this algorithm will terminate.  Say r_{i+1} = 0; then the equation r_{i-1} = a_{i+1}r_{i} + 0 says that r_{i} divides r_{i-1}, and working back up the chain of equations, r_{i} divides every earlier remainder and hence divides both n and m.  That is, we will have found some r_{i} which is a common divisor for both n, m; specifically, it will be the r_{i}\neq 0 such that r_{i+1} = 0 (or, it may simply be n if n divides m).

This algorithm, called the Euclidean Algorithm, actually does more "automatically": it not only finds a common divisor, but actually finds the greatest common divisor of m,n, which, from now on, we will denote \gcd(m,n).  The "proof" of this is simply noting that \gcd(m,n) = \gcd(n,r_{1}) = \gcd(r_{1},r_{2}) = \cdots = \gcd(r_{i-1},r_{i}) = r_{i} (we noted this above without making reference to the gcd, but the reader should attempt to go through all the same steps using the idea of the gcd). 

So.  If you have two natural numbers, n,m, you divide them, find the remainder, write the equation, then continue as above until you get a 0 remainder.  Then you pick the remainder directly before you got 0 as your gcd (or, you pick the smaller number if one number divides the other).  Pretty simple algorithm, but is it efficient?

Without going into formal "efficiency" definitions, "yes", it is quite efficient.  To prove it, let’s take an "average" example using the "large" numbers 1337944608 and 4216212.  We note (by pen and paper, or by using a standard calculator) that

1337944608 = 317(4216212) + 1405404.

Next, we note that

4216212 = 3(1405404) + 0

which instantly gives us the solution \gcd(4216212, 1337944608) = 1405404.  That’s pretty awesome.  Note that this was an especially quick trial, but even the "worst" ones are relatively quick. 
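For the skeptical, here is the entire algorithm as a minimal Python sketch (the standard library’s math.gcd does the same job):

```python
def euclid_gcd(m, n):
    """Repeatedly replace (m, n) by (n, r), where r is the remainder of
    m divided by n; stop when the remainder is 0."""
    while n != 0:
        m, n = n, m % n
    return m

print(euclid_gcd(1337944608, 4216212))  # 1405404, as computed above
```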

 

Unexpected Corollary!:  For n,m natural numbers, if \gcd(n,m) = k then there exist integers a,b such that an + bm = k.

 

This is more useful than you might think at first glance, and we’ll get into why in a later post, but what’s nice about this corollary is that it comes "for free" from the Euclidean algorithm.  Note that, since k divides n, m, it suffices to prove this corollary for an + bm = 1 where n, m have \gcd(n,m) = 1.  The proof uses induction on the number of steps of the Euclidean algorithm for those numbers, but for those of you who are more experienced and know modular arithmetic, you may enjoy the following simple proof:

 

"Clever" Proof of the Corollary: Let m > n (for equality, the proof is easy).  We will only care about remainders in this proof, so we will look at some numbers modulo m.  Consider

 

r_{1} = n\mod m

r_{2} = 2n\mod m

\vdots

r_{m-1} = (m-1)n\mod m

 

Note there are exactly m-1 remainders here and that the remainder 0 never occurs (since m,n are relatively prime).  Suppose that r_{i} \neq 1 for each of the i; that is, the remainder 1 does not ever show up in this list.  By the pigeon-hole principle (as there are m - 1 remainders but only m -2 possible values for the remainders) we must have that r_{i} = r_{j} for some i\neq j.  That is, we have

in \mod m = jn \mod m

which implies

(i-j)n\mod m = 0

but this is impossible: it says that m divides (i-j)n, and since m,n are relatively prime, m would have to divide i-j; but 0 < |i-j| < m, so this cannot happen.  Hence, the remainder 1 must occur.  That is, r_{c} = 1 for some c and

cn \mod m = 1.

But what does this mean?  It means that cn = dm + 1 for some integer d; that is, cn - dm = 1.  To make this prettier, let a = -d and b = c, and we find that there exist integers a,b such that am + bn = 1, as required.  \Box

 

Pretty slick, no?
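What’s nice is that the back-substitution through the Euclidean algorithm computes the integers a,b at the same time as it computes the gcd.  Here is a sketch in Python (usually called the extended Euclidean algorithm; the function name is mine):

```python
def extended_gcd(m, n):
    """Return (g, a, b) with a*m + b*n == g == gcd(m, n)."""
    if n == 0:
        return (m, 1, 0)  # gcd(m, 0) = m = 1*m + 0*0
    g, a, b = extended_gcd(n, m % n)
    # g = a*n + b*(m % n) = a*n + b*(m - (m//n)*n) = b*m + (a - (m//n)*b)*n
    return (g, b, a - (m // n) * b)

g, a, b = extended_gcd(4216212, 1337944608)
print(g, a * 4216212 + b * 1337944608)  # 1405404 1405404
```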

This post will require some very basic knowledge of category theory (like, what a category is, and how to make a poset into a category).  For everything below, I will be a bit informal, but I will essentially mean that A, B are objects in a category, and f:A\to B is some morphism between them which is also in the category.

 

The "natural" extension of the notion of a surjective map (in, say, the category of sets) is

 

Definition.  A map f:A\to B is an epimorphism if, for each object Z and each pair of maps g,g':B\to Z, we have that if g\circ f = g'\circ f then g = g'.

 

You should prove for yourself that this is, in fact, what a surjective map "does" in the category of sets.  Pretty neat.  Similarly, for injective maps (in, say, the category of sets) we have the more general notion:

 

Definition. A map f:A\to B is a monomorphism if, for each object Z and each pair of maps g,g':Z\to A, we have that if f\circ g = f\circ g' then g = g'.

 

Again, you should prove for yourself that this is the property that injective mappings have in the category of sets.  Double neat.  There is also a relatively nice way to define an isomorphism categorically — which is somewhat obvious if you’ve seen some algebraic topology before.

 

Definition. A map f:A\to B is an isomorphism if there is some mapping g:B\to A such that f\circ g = 1_{B} and g\circ f = 1_{A}, where 1_{A},1_{B} denote the identity morphism from the subscripted object to itself.

 

Now, naively, one might think, "Okay, if I have some certain kind of morphism in my category (set-maps, homomorphisms, homeomorphisms, poset relations, …) then if it is an epimorphism and a monomorphism, it should automatically be an isomorphism."  Unfortunately, this is not the case.  Here are two simple examples.

 

Example (Mono, Epi, but not Iso).  The simplest category in which this happens is the category 2, which I’ve drawn below:

[Diagram: the category 2, with two objects a, b, their identity morphisms, and a single morphism f:a\to b.]

There are two objects, a,b, and three morphisms: the two identities and the morphism f:a\to b.  First, prove to yourself that this is actually a category.  Second, we note that f:a\to b is an epimorphism: the only morphism out of b is the identity 1_{b} (there is no morphism from b\to a), so the required property holds trivially.  Third, we note that f:a\to b is a monomorphism for essentially the same reason: the only morphism into a is 1_{a}.  Last, we note that f is not an isomorphism: we would need some g: b\to a which satisfied the properties in the definition above…but, there is no map from b\to a.  Upsetting!  From this, we must conclude that f cannot be an isomorphism despite being a mono- and epimorphism.
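If you like to compute, this example is small enough to brute-force.  Below is a Python sketch; the encoding is mine, representing each morphism of 2 by its (source, target) pair, which works because this category has at most one morphism between any two objects:

```python
from itertools import product

morphisms = [("a", "a"), ("b", "b"), ("a", "b")]  # 1_a, 1_b, and f

def compose(g, f):
    """g∘f, defined when target(f) == source(g); here the composite is
    the unique morphism source(f) -> target(g), if it exists."""
    if f[1] != g[0] or (f[0], g[1]) not in morphisms:
        return None
    return (f[0], g[1])

def is_epi(f):
    """f is epi if g∘f = g'∘f forces g = g' for all g, g' out of target(f)."""
    outs = [g for g in morphisms if g[0] == f[1]]
    return all(g == g2 for g, g2 in product(outs, outs)
               if compose(g, f) == compose(g2, f))

def is_mono(f):
    """f is mono if f∘g = f∘g' forces g = g' for all g, g' into source(f)."""
    ins = [g for g in morphisms if g[1] == f[0]]
    return all(g == g2 for g, g2 in product(ins, ins)
               if compose(f, g) == compose(f, g2))

def is_iso(f):
    """f is iso if some g composes with f to the two identities."""
    return any(compose(g, f) == (f[0], f[0]) and compose(f, g) == (f[1], f[1])
               for g in morphisms if g[0] == f[1] and g[1] == f[0])

f = ("a", "b")
print(is_epi(f), is_mono(f), is_iso(f))  # True True False
```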

 

 

Similar Example (Mono, Epi, but not Iso).  Take the category ({\mathbb N}, \leq), the natural numbers with morphisms as the relation \leq.  Which morphisms are the monomorphisms?  Which morphisms are the epimorphisms?  Prove that the only isomorphisms are the identity morphisms.  Conclude that there are a whole bunch of morphisms which are mono- and epimorphisms but which are not isomorphisms.

[Note: It’s been a while!  I’ve now completed most of my pre-research stuff for my degree, so now I can relax a bit and write up some topics.  This post will be relatively short just to “get back in the swing of things.”]

 

In Group Theory, the big question used to be, “Given such-and-such is a group, how can we tell which group it is?”

 

https://i2.wp.com/upload.wikimedia.org/wikipedia/commons/9/9b/Ludwig_Sylow.jpg

 

The Sylow Theorems (proved by Ludwig Sylow, above) provide a really nice way to do this for finite groups using prime decomposition.  In most cases, the process is quite easy.  We’ll state the theorems here in a slightly shortened form, but you can read about them here.  Note that a subgroup of order p^{\beta} for some \beta is, unsurprisingly, called a p-subgroup.  A p-subgroup of maximal order in G is called a Sylow p-subgroup.

 

Theorem (Sylow).  Let G be a group such that |G| = p^{\alpha}m with p\nmid m.  Then,

  1. There exists at least one subgroup of order p^{\alpha}.
  2. The Sylow p-subgroups are conjugate to one-another; that is, if P,Q are Sylow p-subgroups, then there is some g\in G such that gPg^{-1} = Q.  Moreover, for all g\in G, we have that gPg^{-1} is a Sylow p-subgroup.
  3. The number of Sylow p-subgroups of G, denoted by n_{p}, satisfies n_{p} \equiv 1\mbox{ mod } p.  Moreover, n_{p} divides m.

 

This first part says that the set of Sylow p-subgroups of G is not empty if p divides the order of G.  Note that this is slightly abbreviated (the second part is actually more general, and the third part has a few extra parts) but this will give us enough to work with.

 

Problem: Given a group  |G| = pq for p,q prime and p < q, is G ever simple (that is, can it have no nontrivial proper normal subgroups)?  Can we say explicitly what G is?

 

We use the third part of the Sylow theorems above.  We note that n_{q} | p and n_{q} \equiv 1\mbox{ mod } q, but p < q so this immediately implies that n_{q} = 1 (why?).  So we have one Sylow q-subgroup; let’s call it Q.  Once we have this, we can use the second part of the Sylow theorems: for each g\in G we have that gQg^{-1} is a Sylow q-subgroup, but we’ve shown that Q is the only one there is!  That means that gQg^{-1} = Q; this says Q is normal in G.  We have, then, that G isn’t simple.  Bummer.

On the other hand, we can actually say what this group is.  So let’s try that.  We know the Sylow q-subgroup, but we don’t know anything about the Sylow p-subgroups.  We know that n_{p} \equiv 1\mbox{ mod }p and n_{p} | q, but that’s about it.  There are two possibilities: either n_{p} = 1 or n_{p} = q.

For the first case, by using the modular relation, if p does not divide q-1 then this forces n_{p} = 1; this gives us a unique normal Sylow p-subgroup P.  Note that since the orders of our normal subgroups multiply up to the order of the group, we have PQ \cong P\times Q \cong G; in other words, G \cong {\mathbb Z}_{p}\times {\mathbb Z}_{q}.

For the second case, n_{p} = q.  We will have a total of q subgroups of order p and none of these are normal; note that, by the modular relation, this case can only happen if p divides q - 1.  This part is a bit more involved (for example, see this post on it), but the punch line is that G will be the nonabelian semidirect product {\mathbb Z}_{q}\rtimes {\mathbb Z}_{p}.  (In the first case, note that {\mathbb Z}_{p}\times {\mathbb Z}_{q} is just the cyclic group {\mathbb Z}_{pq}.)

 

I’ll admit that the last part is a bit hand-wavy, but this should at least show you the relative power of the Sylow theorems.  They also come in handy when trying to show something either does or does not have a normal subgroup.  Recall that a simple group has no nontrivial normal subgroups.

 

Question.  Is there any simple group with |G| = 165?

 

I just picked this number randomly, but it works pretty well for this example.  We note that |G| = 165 = 3\cdot 5\cdot 11.  Let’s consider, for kicks, n_{11}.  We know n_{11} must divide 3\cdot 5 = 15 and it must be the case that n_{11} \equiv 1\mbox{ mod } 11; putting these two facts together, we get n_{11} = 1.  This immediately gives us a normal subgroup of order 11, which implies there are no simple groups of order 165.

 

Question.  Is there any simple group with |G| = 777?

 

Alas, alack, you may say that 777 is too big of a number to do, but you’d be dead wrong.  Of course, 777 = 3\cdot 7\cdot 37.  Use the same argument as above to show there are no simple groups of this order.

 

Question.  Is there any simple group with |G| = 28?

 

Note that 28 = 2^{2}\cdot 7 so we need to do a little work, but not much.  Just for fun, let’s look at n_{7}.  We must have that it is 1 modulo 7 and it must divide 2^{2} = 4.  Hm.  A bit of thinking will give you that n_{7} = 1, which gives us the same conclusion as above.

 

Of course, there are examples where this doesn’t work nicely.  Think about the group of order 56, for example.  In order to work with these kinds of groups, one must do a bit more digging.  We will look into more of this later.
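The n_{p} bookkeeping above is entirely mechanical, so it is easy to hand to a computer.  Here is a small Python sketch (the function is mine):

```python
def sylow_np_candidates(order, p):
    """Possible values of n_p for a group of the given order: divisors of
    the p'-part m that are congruent to 1 mod p (third Sylow theorem)."""
    m = order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(sylow_np_candidates(165, 11))  # [1]    -> normal Sylow 11-subgroup
print(sylow_np_candidates(777, 37))  # [1]    -> normal Sylow 37-subgroup
print(sylow_np_candidates(28, 7))    # [1]    -> normal Sylow 7-subgroup
print(sylow_np_candidates(56, 7))    # [1, 8] -> no immediate conclusion
```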

Baire Category Theorem.

January 5, 2012

During one of my previous qualifying exams, one of the questions looked, at first, to be a basic Fourier Analysis question.  The hint, though, was to use the Baire Category Theorem.  The problem and hint looked so unrelated that the phrase has become somewhat of a running joke for some of us in the department (“How are we going to prove this is a cofibration?”  “Oh, easy, just use the Baire Category theorem.”).

The more I read about it, the more I realized that the Baire Category theorem pops up much more than I realized and in a number of surprising and diverse topics.  This post is just to introduce you to the BCT and to show off some of its raw power.  One of the results below is one that I genuinely don’t believe — that is, I know it has to be true (it was proved!) but it just seems so outrageous that I can’t bring myself to believe it!  But, we’ll get to that.

 

Leading up to the Theorem.

There are a few different formulations of the theorem, so I’ll pick the one that turns out to be most common in what I’ve read.  First, let’s quickly define some terms.

 

Definition.  A complete metric space is a metric space where each Cauchy sequence converges to a point in the space.

 

For example, {\mathbb R} is complete, but {\mathbb Q} is not complete since in the latter space we can take a sequence like: 1, 1.4, 1.41, 1.414, … which are all rational numbers that converge to \sqrt{2}, but \sqrt{2} isn’t in the rational numbers.

 

Definition.  A subset of a space is dense if the closure of the subset is the entire space.

 

For example, {\mathbb Q} is dense in {\mathbb R} since the closure of {\mathbb Q} is {\mathbb R}.  The irrationals are also dense in {\mathbb R}.  The integers, {\mathbb Z}, are not dense in {\mathbb R} because the closure of the integers in {\mathbb R} is just {\mathbb Z} (why?).

 

Q: What are the dense subsets of a non-empty discrete space?  Does this make sense “visually”?

 

Definition.  The interior of a subset Y is the largest open set contained in Y.  In metric spaces, it’s the union of all the open balls contained in Y.

 

The next definition might be a little strange if you haven’t seen it before and, in fact, there are two nice (equivalent) ways of thinking about this concept.  I’ll list both.

 

Definition.  A set Y is nowhere dense in some space X if either (and hence both):

  1. The closure of Y in X has empty interior.
  2. \overline{(\overline{Y})^{C}} = X; that is, the complement of the closure of Y is dense in X.

 

This last definition might seem a bit strange to you, but the “image” of this is the following: a nowhere dense set is literally a set which is not dense anywhere — but what does that mean?  Density isn’t usually considered a local property.  How do we usually check density, though?  We take the closure of the set and if it’s the whole space then we say the set is dense.  If we wanted to make this more “local”, we could ask that the closure of our set contain no neighborhood of any point.  This is the same as saying the closure of our subset contains no open set, or that it has empty interior.  This gives us the first definition.  The second is a direct consequence which is sometimes more useful to think about depending on your original subset.

 

Example.  We have {\mathbb Z}\subseteq {\mathbb R} with the standard topology.  Is {\mathbb Z} nowhere dense in {\mathbb R}?  Note that \bar{{\mathbb Z}} = {\mathbb Z} and this has empty interior.  By the first definition, {\mathbb Z} must be nowhere dense in {\mathbb R}.  If we wanted to use the second definition, we could say that the complement of {\mathbb Z} in {\mathbb R} (which, since {\mathbb Z} is closed, is also the complement of its closure) is just everything but the integers; the closure of this is {\mathbb R}, which means the complement of {\mathbb Z} is dense in {\mathbb R}, giving us the same conclusion.  This is not surprising, since the elements of {\mathbb Z} are “pretty far apart” in {\mathbb R}.

 

Example.  Our favorite limit-point example, Y = \{\frac{1}{k}\,|\, k\in {\mathbb N}\}, is a subset of {\mathbb R}.  Is it nowhere dense?  Using the first definition, the closure is just Y\cup \{0\}, which has empty interior (why?).  With the second definition, the complement of this closure has closure equal to {\mathbb R} and so is dense.  In either case, this proves that Y is nowhere dense in the reals.

 

Example.  The ball (0,1)\subseteq {\mathbb R}.  The closure of this set is [0,1] which has interior equal to (0,1).  Therefore, this set is not nowhere dense.  That’s a bit sad.  But you can see that this set really is “somewhere dense” visually.

 

Example.  The Cantor set is closed (there are a number of cute ways to prove this) so to show it is nowhere dense in [0,1] it suffices to show that it has empty interior by the first definition.  Prove it!

 

You may have been surprised with the last example, or even the \frac{1}{k} example.  For the latter one, it seems like the points are really “getting close” to 0, so maybe some density should occur; it’s not the case, though!  For the Cantor set, many points in this set can be “very close” to one-another so it is a bit surprising that this set is not dense anywhere.  If one goes through the construction slowly, you’ll see why the points are “just far enough apart” to reasonably be considered (that is, intuitively considered) nowhere dense.

 

The Theorem.

Why would we ever care about nowhere dense sets?  Well, they’re kind of crappy in the following sense: even if we unioned a countable number of nowhere dense subsets together, we could never recover the whole space.  An analogy from anthropology would go like this: we might be able to find tons of copies of some ancient book, but if each of them are missing a whole lot (nowhere dense) then even combining them all together we can’t recover the original manuscript.

 

Oh.  And why do jigsaw puzzle makers make each of the puzzle pieces look like that?

https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Jigsaw_Puzzle.svg/641px-Jigsaw_Puzzle.svg.png

Because if they made them nowhere dense, we would never be able to finish the puzzle.  Buh-dum-tsh.

 

Theorem (Baire Category Theorem).  A non-empty complete metric space is not the countable union of nowhere dense sets.

 

This shouldn’t be too scary to you at this point.  Let’s do some immediate consequence examples.

 

Example.  Do you think we could union a countable number of things that look like {\mathbb Z} together and get something like the real numbers?  That is, maybe we could take

{\mathbb Z} = \{\dots, -1,0,1,2,\dots\}

{\mathbb Z} + \frac{1}{2} = \{\dots, -\frac{1}{2}, \frac{1}{2}, \frac{3}{2}, \frac{5}{2},\dots\}

{\mathbb Z} + \frac{1}{3} = \{\dots, -\frac{2}{3}, \frac{1}{3}, \frac{4}{3}, \frac{7}{3},\dots\}

and so on, and union them all together to make {\mathbb R}.

Nope.  The BCT says to not even try.  Each of these translates of {\mathbb Z} is nowhere dense, so a countable union of them would not get you {\mathbb R}.

 

Non-Example.  What if we did the same thing as above, but we wanted to union them all together to get {\mathbb Q}, the rationals.  What would Baire say about that?  Probably, “Don’t use my theorem with this, because {\mathbb Q} isn’t a complete metric space.”

 

Non-Example.  What if we tried the same thing with the {\mathbb Z} translates above as subsets of {\mathbb R}, but we union them all and then union that with the irrational numbers.  In this case, too, we can’t use Baire, because the irrational numbers are actually dense in the reals; the BCT doesn’t say anything about unions with dense subsets.

 

Example.  What if we have a complete metric space with no isolated points?  Can it be countable?

(Recall, an isolated point is one which has a neighborhood not containing any other point of the space.  For example, if we took \{0\}\cup [1,2] in the subspace topology of the reals, then \{0\} is an isolated point of this subspace since it, itself, is open and it contains no other point of the subspace.)

In this case, we can cleverly use the BCT to show that every complete metric space with no isolated points is uncountable.  Suppose it were countable.  Then every point of the space is nowhere dense (!  Why is this the case?  What is the closure of a point?  What is its interior?  Does this argument show why we cannot have isolated points?).  But since our space is countable, it is the countable union of its points, each of which is nowhere dense.  Baire would have a real problem with that.  This contradicts BCT, so such a space must be uncountable.

 

Why is it called the “Category” theorem?

This “category” isn’t the same category as in category theory.  The term comes from the original language that the theorem was stated in.  It’s a bit ugly, so I’ve yet to mention it, but I might as well spell it out for you now:

  • We call a set of First Category if it can be written as a countable union of nowhere dense sets.  This is also called a meagre set.  (Amusingly, the meagre is also the name of a fish.)  Sometimes this is Americanized (that is, bastardized) and pronounced as “meager set” which would mean something like, “thin set” or “set having little substance” which is, intuitively, what this set should “look” like.
  • We call a set of Second Category if it is not of first category.

 

We also have what’s called a Baire Space, which is just a space where every non-empty open set is of second category.  This means, intuitively, that an open set needs to have some “density” to it.  The original Baire Category theorem stated that every complete metric space is a Baire Space.  This is “almost” equivalent to what we have above; see below.

 

Other Formulations of the BCT.

There are three distinct formulations of the BCT that are generally given (the texts  I have in my office give at most two, but the third is a consequence of the first so this is perhaps why).  Right above this, we give one of them.

 

BCT (1).  Every complete metric space is a Baire space; that is, every non-empty open subset of a complete metric space is not the countable union of nowhere dense sets.

BCT (3).  Every non-empty complete metric space is not the countable union of nowhere dense sets.

 

I list these together because it’s clear that the third is a consequence of the first.  (These numbers are because the second BCT is actually called “the second BCT” in a few places.)

 

BCT (2).  Every locally compact Hausdorff space is a Baire space.  Recall that locally compact means that every point of our space has a compact neighborhood.

 

Note that it’s not the case that the first BCT implies the second, and it’s not the case that the second implies the first.  It would be nice if there were some meta-BCT that implied both of them, but when we write down exactly what we need for something to be a Baire space, we’re left with a couple of choices; if you’re interested in this, you should check out the proofs for both of these theorems and see where we’re using locally compact versus where we’re using complete.

A Surprising Result.

One of the coolest results I’ve seen using the BCT (and, indeed, one of the coolest results I’ve seen perhaps in general) is the following.

 

Maybe, first, it’s nice to ask the following question (that you can even ask to your friends!): what’s your idea of a “general” continuous function?  That is, just think of the most generic continuous function you can.  Visualize it.  Maybe draw it.  What do you have?  Probably some sort of polynomial.  (When I do this, I actually draw fifth degree polynomials most of the time.)  Is this really so generic?

 

First, let’s just define exactly what we mean by “generic.”

 

Definition.  If A is a subset of some space X, the complement of A is meagre (of first category, as defined above), and all points in A share some property, then we call this property generic of X.

 

This should be reasonable: something generic should be representative of the whole.  That is, if something is not generic, then it should not happen very often; in our language, the set of non-generic things should be meagre.

 

Theorem (Banach, Mazurkiewicz, 1931).  The generic (in the sense of the above definition) continuous function on a compact interval (that is, f\in C^{0}([a,b],{\mathbb R})) is nowhere differentiable, and not monotone on any subinterval of [a,b].

 

Just think about that for a second.  Let it sink in.  Nowhere differentiable is strange enough; we can (and will) use the BCT to talk about a nowhere differentiable function.  But, what’s more, it’s not monotone on any subinterval, no matter how small.  And, yes, this function is generic.  What.

 

As a note, somewhat strangely, because of the above definition of “generic” it is entirely possible to have two different kinds of “generic” things representing some space.  The definition doesn’t say anything about the generic property being unique and, in general, it isn’t.

 

 

BCT in Functional Analysis.

to do.

Nested Kind of Topology.

October 16, 2011

In the last post, I posted a topology on \{1,2,3,\dots, k-1\} where the open sets were

\{\emptyset , \{1\}, \{1,2\}, \{1,2,3\},\dots, \{1,2,3,\dots, k-1\}\}

You can check that this is, in fact, a topology and it, in fact, has exactly k elements.  But this is also a really nice topology for intro-topology students.  Why?  Because it has some strange properties which are easy to get your hands on and prove!  Let’s show some of these.  For the sake of brevity, let’s call this topology P.
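If you’d like the computer to do the checking, here is a small Python sketch; for a finite family of sets, closure under pairwise unions and intersections is enough:

```python
from itertools import combinations

def is_topology(opens, X):
    """Finite check of the axioms: contains the empty set and all of X,
    and is closed under pairwise unions and intersections."""
    opens = [frozenset(U) for U in opens]
    return (frozenset() in opens and frozenset(X) in opens
            and all(U & V in opens and U | V in opens
                    for U, V in combinations(opens, 2)))

k = 6
P = [set(range(1, n + 1)) for n in range(0, k)]  # n = 0 gives the empty set
print(is_topology(P, range(1, k)), len(P))       # True 6, so exactly k open sets
```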

 

P is not Hausdorff.

This is easy to see.  Pick the elements 1 and 2.  The only set which contains 1 but not 2 is \{1\}, but there is no set which contains 2 but not 1. 

 

P is T_{0}.

Two points are topologically indistinguishable if they have exactly the same neighborhoods; a T_{0} space is one where any two distinct points are topologically distinguishable.  It is trivial to check that this holds here.

 

P is connected.  (!)

This is somewhat surprising to me (especially because of how disjoint it "seems"!) but if we attempt to find a separation we find that it is impossible: every non-empty open set contains 1, so no two non-empty open sets are disjoint.

 

A function f:{\mathbb R}\rightarrow P is continuous if and only if, for each n\in \{1,2,3,\dots, k-1\}, the set of all x which map into \{1,2,\dots,n\} is open (in the standard topology on {\mathbb R}).

This is simply taking preimages.  I mention it because actually DRAWING the function on the "plane" (where the "x-axis" is {\mathbb R} and the "y-axis" is \{1,2,3,\dots, k-1\}) we get something which does not at all look continuous (unless it is constant).  This is perhaps a good exercise for intro topology students to work out so that they can see that not all continuous maps "look" continuous.  Of course, there are many other ways to see this using other spaces.

 

Limits?  What about the sequences in this set?  Let’s remind ourselves what convergence is:  \{x_{n}\}\rightarrow x if for every open set U containing x there is some N such that for all n\geq N we have x_{n} \in U.  Note then that:

\{1,1,1,1,1,\dots\} converges to 1.

\{2,2,2,2,2,\dots\} converges to 2.

and so on.  But then we have

\{1,2,1,2,1,\dots\} which converges to 2 (and, in fact, to everything larger), but not to 1 (why?).

\{1,2,3,1,2,3,\dots\} which converges to 3 (and everything larger), but not to 2 or 1 (why?).

and so forth.  In fact, if a sequence takes exactly the values n_{1}, n_{2}, \dots, n_{r}\in \{1,2,\dots, k-1\} infinitely often, then it converges to precisely those elements greater than or equal to \max(n_{1}, n_{2}, \dots, n_{r}), which is kind of a neat characterization.  Since every sequence in a finite set takes some set of values infinitely often, this takes care of all of the sequences. 
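Here is a small Python sketch of this characterization; I describe a sequence by the set of values it takes infinitely often, which (as noted above) is all that matters for convergence:

```python
def opens(k):
    """The open sets of our topology P on {1, ..., k-1}."""
    return [set(range(1, n + 1)) for n in range(0, k)]  # n = 0 gives the empty set

def converges_to(inf_often, x, k):
    """The sequence converges to x exactly when every open set containing x
    contains every value taken infinitely often (the finitely-occurring
    values are eventually left behind by some tail)."""
    return all(set(inf_often) <= U for U in opens(k) if x in U)

k = 6  # the space {1, 2, 3, 4, 5}
print([x for x in range(1, k) if converges_to({1, 2}, x, k)])  # [2, 3, 4, 5]
```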

 

Limit Points?  A limit point p of a subset A is a point such that for every open U containing p we have A \cap (U-\{p\}) \neq \emptyset.

Consider the subset A = \{1\} and let’s see if there are any limit points.  What about 2?  Well, we have that \{1,2\} - \{2\} = \{1\}, so the intersection of this with A is nontrivial.  Similarly, every other open set containing 2 has nontrivial intersection with A even after removing the point 2.  What about 3?  Same deal.  And 4?  Yes.  So we can now ask: what’s the closure of \{1\}?

Consider A = \{1,2\} and we will start looking at the element 3.  Is 3 a limit point?  For the same reasons above, it is.  So what is the closure of this set? 

Let’s be a little adventurous and ask what about A = \{1, 3\}.  For the same reasons as above, 4,5,6\dots, k-1 are all limit points, but what about 2?  In fact, it is.

Last (and this will complete our "characterization" of the closures of our subsets), what about A = \{2,3\}?  Clearly we will have 4, 5, 6, \dots, k-1 as limit points, but the odd one out is 1.  And 1 is not a limit point (why not?).  That’s a bit sad.  So, taking the least element in the subset allows us to describe the closure: it’s the set of every element greater than or equal to it.

 

Closed sets.  It’s a bit interesting to think up what sets we got from taking the closures of each subset, and then to just start with our open sets and take complements.  Do we get the same thing?  Neato.

 

Compactness?  A bit trivial in this case.  Why?  (Maybe extending to infinite things would make this cooler…)

 

Here’s something weird, too: what is a path in this space?  What is a loop?  How could we construct something like the fundamental group for this?  Homology groups?  To those with more experience, these questions might be easy; but it is not obvious to me (at least at this point!) how these constructions should go. 

The following question came up today after a brief and somewhat unsatisfying talk I gave on the intuition behind topologies:

 

Question: Given n\in {\mathbb N}, can we construct a topology \tau such that \tau has exactly n elements?

 

The solution wasn’t obvious to me until I looked at the pattern of numbers on the back of a thing of Dried Papaya Chunks.  For n = 1 we can take the empty space (whose topology is just \{\emptyset\}), and for n = 2 we can take \emptyset and \{1\}; but for n = k we need to be a bit more clever.  But not much more.  Consider:

 

\{\emptyset ,\{1\},\{1,2\},\{1,2,3\},\dots \{1,2,3,\dots, k-1\}\}.

This is a topology on \{1,2,\dots, k-1\} with exactly k elements. 

 

In these examples, the open sets are nested inside one another, which is somewhat reminiscent of the "particular point" topology — the topology in which a set is open if and only if it contains some predetermined point (or is empty).  In the finite case, how many open sets are in the particular point topology?  Let’s do a few examples. 

 

If our universe is \{1\}, there are two open sets in the particular point topology (empty set and the whole set).

If our universe is \{1,2\}, then we have three open sets in the particular point topology.

If our universe is \{1,2, 3\}, then we have five open sets in the particular point topology.

If our universe is \{1,2,3,4\}, then we have nine open sets in the particular point topology.

If our universe is \{1,2,3,4,5\}, then we have seventeen open sets in the particular point topology.

 

If we exclude the empty set, it seems like we will have 2^{k-1} elements in our particular point topology.  Why is this? 

(Hint: Take the last example above with seventeen open sets and suppose the element 5 is your particular point.  Write down all the open sets containing 5.  Now take your pen and scribble out the 5’s in each of those open sets — do the resulting sets look familiar?  Hm.)
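Here is a quick Python sketch that builds the particular point topology and counts its open sets, matching the numbers above:

```python
from itertools import combinations

def particular_point_topology(k, p):
    """The empty set, plus every subset of {1, ..., k} containing p."""
    universe = range(1, k + 1)
    tops = [frozenset()]
    for r in range(1, k + 1):
        tops += [frozenset(s) for s in combinations(universe, r) if p in s]
    return tops

for k in range(1, 6):
    print(k, len(particular_point_topology(k, 1)))  # 2, 3, 5, 9, 17: i.e. 2^(k-1) + 1
```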

 

Neat.  If you take other finite topologies, are there any neat patterns that arise when you increase the number of digits?  Is it possible to associate some sort of polynomial or invariant with finite topologies?  With infinite topologies?  Would this even be useful?

"Why can’t you have a retract of the disk onto its boundary circle?"

This is the question we will attempt to answer in this post.  Along the way, we will climb treacherous mountains and dive deep into dark waters — most frightening, perhaps: we will find things that we didn’t even know we were looking for.


The Sierpinski Space.

March 22, 2011

What’s the weakest separation we can have in a topological space?

Well, “no separation” is pretty weak.  But this creates the trivial topology and that’s a bit boring.  So let’s say this:

 

Definition: A topological space X is T_{0} or Kolmogorov if for every two points x,y\in X we have that there exists a neighborhood U such that either x\in U and y \notin U or y\in U and x\notin U.

 

In other words, a space is T_{0} if for every pair of points there is at least one open set which contains one and doesn’t contain the other.  This is a pretty weak separation condition.  Certainly, every Hausdorff space is T_{0}, but there are spaces satisfying this condition which are far from being Hausdorff.  Let’s try to construct a really easy one.


[This homology adventure will, unfortunately, have no pictures.  I want to make some, but my bamboo pad is being irritable.]

Here’s the game: Take the plane.  I remove, say, n points from the plane.  You tell me the homology groups of the resulting space.


In this post, we’re going to do something that I thought helped me through homology, but that others no doubt think is a huge waste of time: we’re going to explicitly compute the homology groups of trivial or nearly-trivial things, without appealing to any theorems regarding homeomorphisms or the like.

Recall the things that we’ve done so far.  The steps to find the homology groups in this post (and, more generally, in life) will be as follows:

  1. Find the chain groups and their generators.
  2. Compute explicitly the image and kernel of the boundary maps.
  3. Find a nice way to express the kernel and image of these boundary maps by potentially using different, but equivalent, generators (we’ll get to this step soon; it’ll make more sense then).
  4. Find the quotient \displaystyle \frac{Ker(\delta_{i-1})}{Im(\delta_{i})}, which is equal to the (i-1)-th cellular homology group.
  5. Reflect on this solution: does it make sense?  Is it nice?  Do we love it?

So without further delay, let’s dig right in.
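As a tiny preview of steps 1–4 above, here is a sketch (mine, not from this post) for the circle built from one 0-cell and one 1-cell, working over the rationals with sympy so that torsion is invisible:

```python
from sympy import zeros

# Cellular chain complex of S^1 with one 0-cell and one 1-cell:
#   0 --δ2--> C_1 = Z --δ1--> C_0 = Z --δ0--> 0,  with δ1 = 0
# (the boundary of the 1-cell hits the single 0-cell twice, with opposite signs).
d1 = zeros(1, 1)                 # the matrix of δ_1 : C_1 -> C_0

rank_d1 = d1.rank()              # dimension of the image of δ_1
ker_d1 = len(d1.nullspace())     # dimension of the kernel of δ_1
b0 = 1 - rank_d1                 # ker δ_0 is all of C_0, so dim ker δ_0 = 1
b1 = ker_d1 - 0                  # rank δ_2 = 0 since there are no 2-cells
print(b0, b1)                    # 1 1: one connected component, one loop
```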
