November 20, 2011
After proving this "Deep Result" [cf. Munkres] a professor will (hopefully) say something like: "Yes, the proof is important, but what does this theorem mean? And what does it mean for spaces which are sufficiently nice, like metric spaces?"
Let’s state the result just so we’re all on the same page.
Theorem. A topological space $X$ is normal (that is, every pair of disjoint closed subsets can be separated by disjoint open neighborhoods) if and only if any two disjoint closed sets can be separated by a continuous function.
The proof of this is not obvious. In fact, it is quite involved. But if $X$ happens to be a metric space, we can make the proof significantly easier by using a function that is similar to a distance function with a few minor modifications. I call this function the standard Urysohn function for metric spaces, for lack of a better (or shorter) name. The function is as follows:

$$f(x) = \frac{d(x, A)}{d(x, A) + d(x, B)},$$

where $A$ and $B$ are the disjoint closed sets and $d(x, A) = \inf_{a \in A} d(x, a)$ is the distance from $x$ to $A$.
But before proving this theorem in the metric space setting, let's look at this function. If $A$ and $B$ are disjoint closed sets, then it is clear that the denominator is non-zero (why? recall that in a metric space $d(x, A) = 0$ if and only if $x$ lies in the closure of $A$), so the function is defined everywhere. If $x \in A$, then $d(x, A) = 0$, so the function evaluates to 0. If $x \in B$, then $d(x, B) = 0$, so the numerator equals the denominator and the function evaluates to 1. And since the distance is always non-negative, the function achieves values in $(0, 1)$ at every point not in $A$ or $B$ (convince yourself that this function is continuous, and that it achieves the values 0 and 1 only at points of $A$ and $B$ respectively). This function, therefore, separates $A$ and $B$.
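As a sanity check, here is a minimal numerical sketch of this function in the plane. Finite point sets stand in for the closed sets, and the names `dist_to_set` and `urysohn` are my own (hypothetical) choices, not anything from the original post:

```python
import math

def dist_to_set(x, S):
    """Distance from point x to a finite set S of points (the inf over S)."""
    return min(math.dist(x, s) for s in S)

def urysohn(x, A, B):
    """The standard Urysohn function f(x) = d(x,A) / (d(x,A) + d(x,B))."""
    dA, dB = dist_to_set(x, A), dist_to_set(x, B)
    return dA / (dA + dB)

# Two disjoint closed (here: finite) sets in the plane -- toy data.
A = [(0.0, 0.0), (0.0, 1.0)]
B = [(3.0, 0.0), (3.0, 1.0)]

print(urysohn((0.0, 0.0), A, B))   # a point of A: prints 0.0
print(urysohn((3.0, 1.0), A, B))   # a point of B: prints 1.0
print(urysohn((1.5, 0.5), A, B))   # the midpoint: prints 0.5, by symmetry
```

Every other point lands strictly between 0 and 1, exactly as the argument above says.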
The next thing you should think about is: what do the preimages $f^{-1}(c)$ look like for values $c \in [0, 1]$? Draw some pictures! Here are some pictures to start you off to get the idea. The first is just in $\mathbb{R}$ and the second is in the real plane $\mathbb{R}^2$. Both have the standard topology.
This one is potentially not to scale, but you'll see that we essentially have a linear relation (since, for points between the two sets, the sum in the denominator is constant — equal to 1 in this case) except when we start looking at points that are less than all the points in $A$ or greater than all the points in $B$. What happens in those cases?
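Since the original picture is missing, here is a sketch of the one-dimensional case with hypothetical stand-in intervals $A = [-1, 0]$ and $B = [1, 2]$ (so the gap between them has length 1, matching the "denominator is 1" remark). Evaluating the function at a few points lets you see the linear stretch and what happens beyond the two sets:

```python
def urysohn_interval(x, a=(-1.0, 0.0), b=(1.0, 2.0)):
    """Urysohn function on R for closed intervals A = [a0, a1], B = [b0, b1].
    The intervals are hypothetical stand-ins for the (missing) picture."""
    dA = max(a[0] - x, 0.0, x - a[1])   # distance from x to the interval A
    dB = max(b[0] - x, 0.0, x - b[1])   # distance from x to the interval B
    return dA / (dA + dB)

for x in [-100.0, -1.0, 0.0, 0.25, 0.5, 0.75, 1.0, 2.0, 100.0]:
    print(f"f({x:7.2f}) = {urysohn_interval(x):.4f}")
```

On the gap $[0, 1]$ the output is exactly $f(x) = x$; far to the left or right of both intervals, the printed values creep toward $1/2$ instead of staying at 0 or 1 — that is the answer to "what happens in those cases?".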
I’ve left this "face" as an exercise because, when I did it, I was kind of excited about the result. What do the preimages look like? Does this look like something you’ve seen before, maybe in physics? Could you modify the Urysohn function above to make it seem MORE like something in physics?
So now that you’ve seen these preimages, it should be relatively clear how to create disjoint neighborhoods around each of the closed sets: for example, $f^{-1}([0, \tfrac{1}{2}))$ and $f^{-1}((\tfrac{1}{2}, 1])$ are disjoint open sets containing $A$ and $B$ respectively. It still takes a bit of proof, but it’s nowhere near the difficulty of the standard Urysohn lemma.
I encourage you, further, to draw some pictures. My rule of thumb is: draw at least five different pictures; four easy ones and a hard one. Find out where the preimages are in each of these. Remember, too, that not every metric space looks the same; what would this look like, for example, in the taxi-cab space? Or in some product space? Or in the discrete space…?
November 6, 2011
I’ve been working on a problem (here is a partial paper with some ideas) that’s really easy for any calculus student to understand, but quite difficult even for Wolfram Alpha to work out in some cases. Here’s the idea:
We know that

$$\lim_{n\to\infty} \frac{n!}{n^n} = 0.$$

It’s not hard to reason this out (there are some relatively obvious inequalities, etc.), but I wanted to know what happened if we considered something like:
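Assuming the elided "known" limit is the classic $n!/n^n \to 0$ (my reading of the gap in the post), a quick numerical check shows how fast it collapses:

```python
from math import factorial

def ratio(n):
    """n! / n^n -- the 'known' limit, which tends to 0."""
    return factorial(n) / n**n

for n in (1, 5, 10, 20):
    print(n, ratio(n))   # the values shrink rapidly toward 0
```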
$$\lim_{n\to\infty} \frac{(n^n)^{(n^n)}}{(n!)!}\,?$$

It turns out, this goes to infinity. Maybe this is not so surprising. But, to balance this out, I thought maybe I could add another factorial on the bottom. What about
$$\lim_{n\to\infty} \frac{(n^n)^{(n^n)}}{((n!)!)!}\,,$$

where this iterated factorial $((n!)!)!$ is just $(n!)!$ with another factorial at the end? It turns out, this one goes to 0.
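Assuming the two ratios have numerator $(n^n)^{(n^n)}$ and denominators $(n!)!$ and $((n!)!)!$ respectively (again, my reading of the elided formulas), you can peek at their behavior without overflowing anything by comparing logarithms with `math.lgamma`, since $\ln m! = \operatorname{lgamma}(m+1)$:

```python
from math import factorial, lgamma, log

def log_ratio_one_fact(n):
    """log of (n^n)^(n^n) / (n!)! -- positive and growing, so the ratio blows up."""
    log_top = n**n * n * log(n)           # log((n^n)^(n^n)) = n^n * n * log n
    log_bot = lgamma(factorial(n) + 1)    # log((n!)!)
    return log_top - log_bot

def log_ratio_two_facts(n):
    """log of (n^n)^(n^n) / ((n!)!)! -- negative and plummeting, so the ratio -> 0."""
    log_top = n**n * n * log(n)
    log_bot = lgamma(factorial(factorial(n)) + 1)   # log(((n!)!)!)
    return log_top - log_bot

# (n!)! already overflows a float for n >= 6 -- the same wall Mathematica hits.
for n in (3, 4, 5):
    print(n, log_ratio_one_fact(n), log_ratio_two_facts(n))
```

Already at $n = 3$ the first log-ratio is around $+82$ while the second is around $-3900$, which is numerical (not rigorous!) evidence for the two claimed limits.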
The problem here is that, past a certain point, Mathematica doesn’t seem to be able to handle the sheer size of these numbers. Consequently, I only have a few values for this. I’ve included everything I have in a google-doc PDF (the only way I can think of to share this PDF), and I’m looking for suggestions. Here are some things I thought of:
- Stirling’s formula. Unfortunately, this starts to get very complicated very quickly; substituting it in for even one of these iterated factorials can take up a good page of notes. It also doesn’t reduce as nicely as I’d like.
- Considering the Gamma function. It may be easier to work with compositions of the gamma function, since it is not discrete and we may be able to bring some calculus-type tools to bear on it.
- Number crunching. For each of these cases, it seems like there is a point where either the numerator or the denominator "clearly" trumps the other; this is not the "best" method to use, but it will give me some idea of which values potentially go to infinity and which go to zero.
- Asymptotics. I’m not so good at discrete math or asymptotics, so there may be some nice theorems (using convexity, maybe?) in that field that I’ve just never seen before — especially comparison results of the form: if $f$ eventually dominates $g$ under such-and-such a condition, then $f/g \to \infty$.
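To illustrate why the Stirling route gets messy, here is a sketch of what a single substitution of the logarithmic form $\ln n! = n\ln n - n + O(\ln n)$ already does to one iterated factorial (this is my own back-of-the-envelope expansion, not from the paper):

$$\ln\bigl((n!)!\bigr) = n!\,\ln(n!) - n! + O\bigl(\ln(n!)\bigr) = n!\,\bigl(n\ln n - n + O(\ln n)\bigr) - n! + O(n\ln n).$$

One more layer of factorial means feeding this entire expression back into Stirling again, which is exactly where the "good page of notes" goes.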
Feel free to comment below if you think of anything.