This post has a pretty weird title, but the problem is easy to state and uses a few interesting mathematical concepts.  It’s worth going through.  Let’s start with the basics.

 

Problem 1.  Let S_{\alpha}(n) = 1^{\alpha} + 2^{\alpha} + \dots + n^{\alpha}.  Show that S_{\alpha}(n) is a polynomial in n for each \alpha\in {\mathbb N} and that this polynomial has degree \alpha + 1.

 

Indeed, for example, we have that 1 + 2 + \dots + n = \frac{n(n+1)}{2}, as we learned in Calculus, and this is a polynomial of degree 2.  Similarly, 1^{2} + 2^{2} + \dots + n^{2} = \frac{n(n+1)(2n+1)}{6}, which is a polynomial of degree 3.  Likewise, 1^{3} + 2^{3} + \dots + n^{3} = \frac{(n(n+1))^{2}}{4}, which is a polynomial of degree 4.
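If you'd like to see these three identities confirmed numerically before proving anything, here is a throwaway brute-force check in Python (nothing more than "do some examples"):

```python
# Brute-force check of the three closed forms for small n.
for n in range(1, 20):
    assert sum(k for k in range(1, n + 1)) == n * (n + 1) // 2
    assert sum(k**2 for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(k**3 for k in range(1, n + 1)) == (n * (n + 1)) ** 2 // 4
print("all three formulas check out")
```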

The associated polynomials in this case are given by Faulhaber’s formula:

 

Theorem (Faulhaber).  For \alpha\in {\mathbb N} we have \displaystyle \sum_{k=1}^{n}k^{\alpha} = \frac{1}{1 + \alpha}\sum_{j=0}^{\alpha}(-1)^{j}\binom{\alpha + 1}{j}B_{j}n^{\alpha + 1 - j}.

 

This formula looks terrifying, but it is not hard to apply in practice.  You may be wondering, though, what the B_{j}'s in this formula stand for.  These are the strange and wonderful Bernoulli numbers, of course!  I always enjoy seeing these creatures, because they unexpectedly pop up in the strangest problems.  There are a number of ways to define these numbers, one of which is to just write them out sequentially, starting with B_{0}:

 

1, -\frac{1}{2}, \frac{1}{6}, 0, -\frac{1}{30}, 0, \frac{1}{42}, 0, -\frac{1}{30}, 0, \dots

 

But in this case it is not so easy to guess the next value.  The clever reader will notice that all of the odd-indexed Bernoulli numbers (except B_{1}) are zero, but other than that there does not seem to be a clear pattern.  Fortunately, we can construct a function which generates the values as coefficients; we'll call this function (surprise!) a generating function.

 

Definition.  We define the sequence \{B_{j}\}_{j=0}^{\infty} by

\displaystyle \frac{t}{e^{t} - 1} = \sum_{j=0}^{\infty}B_{j}\frac{t^j}{j!}.

 

Notice that this will, in fact, generate the B_{j} as coefficients times \frac{1}{j!}.  Neat.  In practice, you can use a program like Mathematica to compute B_{j} for pretty large values of j; but, of course, there are lists available.  We can now use Faulhaber’s formula above, which gives us (assuming we have proven that the formula holds!) that the sums of powers of natural numbers form polynomials of degree \alpha + 1.
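To see Faulhaber's formula in action, take \alpha = 2 and read off B_{0} = 1, B_{1} = -\frac{1}{2}, B_{2} = \frac{1}{6} from the list above:

\displaystyle \sum_{k=1}^{n}k^{2} = \frac{1}{3}\left[\binom{3}{0}B_{0}n^{3} - \binom{3}{1}B_{1}n^{2} + \binom{3}{2}B_{2}n\right] = \frac{n^{3}}{3} + \frac{n^{2}}{2} + \frac{n}{6},

which agrees with \frac{n(n+1)(2n+1)}{6}.  And if you'd rather not use Mathematica or a table, here is a minimal Python sketch (the helper name bernoulli is mine) using the recurrence \sum_{j=0}^{m}\binom{m+1}{j}B_{j} = 0 for m\geq 1, which you get by multiplying both sides of the generating function identity by e^{t} - 1 and comparing coefficients of t^{m+1}:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] from the recurrence
    sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        # The j = m term contributes C(m+1, m) * B_m = (m+1) * B_m; solve for B_m.
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

print(bernoulli(9))
# -> 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30, 0
```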

 

But something else happens that’s pretty interesting.  Let’s look at some of the functions.

 

\begin{array}{|c|c|}\hline \alpha & \mbox{Polynomial}\\\hline 1 & \frac{1}{2}n^{2} + \frac{1}{2}n\\ 2 & \frac{1}{3}n^{3} + \frac{1}{2}n^{2} + \frac{1}{6}n\\ 3 & \frac{1}{4}n^{4} + \frac{1}{2}n^{3} + \frac{1}{4}n^{2}\\ 4 & \frac{1}{5}n^{5} + \frac{1}{2}n^{4} + \frac{1}{3}n^{3} - \frac{1}{30}n\\\hline \end{array}

 

Look at the coefficients in each of these polynomials.  Anything strange about them?  Consider them for a bit.

 

Problem.  Look at the coefficients.  What do you find interesting about them?  Note that, in particular, for a fixed \alpha, the coefficients of the associated polynomial sum to 1.  Convince yourself that this is probably true (do some examples!) and then prove that it is true.  Do this before reading the statements below.
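If you want a quick way to "do some examples" without spoiling anything, here is a sketch that builds the coefficients straight from Faulhaber's formula, reusing the bernoulli helper from earlier (faulhaber_coeffs is an illustrative name of mine).  It confirms the pattern for the first several \alpha without telling you why it holds:

```python
from fractions import Fraction
from math import comb

def faulhaber_coeffs(alpha, B):
    """Coefficient of n^{alpha+1-j} for j = 0, ..., alpha in Faulhaber's formula."""
    return [Fraction((-1) ** j * comb(alpha + 1, j), alpha + 1) * B[j]
            for j in range(alpha + 1)]

B = bernoulli(10)               # the helper from the earlier sketch
for alpha in range(1, 8):
    print(alpha, sum(faulhaber_coeffs(alpha, B)))   # prints 1 every time
```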

 

Anecdote.  I spent quite a while trying to write down the "general form" of a polynomial with elementary symmetric polynomials and roots to try to see if I could prove this fact using some complex analysis and a lot of terms canceling out.  This morning, I went into the office of the professor to ask him about what it means that these coefficients sum up to 1.  He then gave me a one-line (maybe a half-line) proof of why this is the case. 

 

Hint.  What value would we plug into a polynomial to find the sum of its coefficients?  What does plugging in this value mean in terms of the sum?


I just saw this problem in a book of algebra problems, and I thought it was nice given one of the previous posts I had up here.

 

Problem.  Show that 11, 111, 1111, 11111, … are not squares.

 

You ought to think about this for a bit.  I started by supposing they were squares and attempting to work things out from there; unfortunately, some strange things happen once the numbers get bigger.  But you should see a nice pattern in the last two digits.  Try it yourself before reading on!
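If you want an empirical nudge before working out the proof, here is a quick Python experiment (this is evidence, not the proof): look at the repunits modulo 4, and compare with what a square can be modulo 4.

```python
# Repunits 11, 111, 1111, ... all end in "11", and 11 mod 4 == 3.
for k in range(2, 9):
    repunit = int("1" * k)
    print(repunit, "mod 4 =", repunit % 4)

# A square can only ever be 0 or 1 mod 4:
print(sorted({(n * n) % 4 for n in range(1000)}))  # [0, 1]
```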


After proving this "Deep Result" [cf. Munkres] a professor will (hopefully) say something like: "Yes, the proof is important, but what does this theorem mean?  And what does it mean for spaces which are sufficiently nice, like metric spaces?"

Let’s state the result just so we’re all on the same page.

 

Theorem.   The topological space X is normal (that is, every pair of disjoint closed subsets A, B can be separated by disjoint neighborhoods) if and only if any two disjoint closed sets can be separated by a continuous function, that is, a continuous f: X\to [0,1] with f = 0 on A and f = 1 on B.

 

The proof of this is not obvious.  In fact, it is quite involved.  But if X happens to be a metric space, we can make the proof significantly easier by using a function that is similar to a distance function with a few minor modifications.  I call this function the standard Urysohn function for metric spaces for lack of a better (or shorter) name.  The function is as follows:

\displaystyle f(x) = \frac{d(x,A)}{d(x,A) + d(x,B)}

but before proving this theorem in the metric space setting, let's look at this function.  If A, B are disjoint closed sets, then the denominator is non-zero (why? recall that d(x,A) = 0 exactly when x\in\overline{A}), so f is defined everywhere.  If x\in A, this function evaluates to 0.  If x\in B, this function evaluates to 1.  And since the distance is always non-negative, the function achieves values in (0,1) at every point not in A or B (convince yourself that this function is continuous and achieves the values 0 and 1 only on A and B, respectively).  This function, therefore, separates A and B.
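Here is a minimal numerical sketch of this function on the real line, with A and B approximated by finite samples (the helpers dist_to_set and urysohn are names I made up for illustration):

```python
def dist_to_set(x, S):
    """d(x, S): distance from x to the (finite sample of the) set S."""
    return min(abs(x - s) for s in S)

def urysohn(x, A, B):
    """f(x) = d(x, A) / (d(x, A) + d(x, B)): 0 on A, 1 on B."""
    dA, dB = dist_to_set(x, A), dist_to_set(x, B)
    return dA / (dA + dB)

# A = [0, 1] and B = [3, 4], each sampled at a fine grid of points.
A = [i / 100 for i in range(101)]        # the interval [0, 1]
B = [3 + i / 100 for i in range(101)]    # the interval [3, 4]
for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]:
    print(x, round(urysohn(x, A, B), 3))
# f is 0 on A, 1 on B, and climbs linearly from 0 to 1 in between.
```

Notice how the values between the two sets trace out exactly the linear ramp you'll see in the first picture below.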

The next thing you should think about is: what do the preimages of f look like?  Draw some pictures!  Here are some pictures to start you off.  The first is just in {\mathbb R} and the second is in the real plane.  Both have the standard topology.

 

[Figure: the graph of f for two disjoint closed intervals A and B on the real line.]

This one is potentially not to scale, but you'll see that we essentially have a linear relation between the two sets (since the sum in the denominator is constant there: it is just the distance between A and B), except when we start looking at points that are less than all the points of A or greater than all the points of B.  What happens in those cases?

 

[Figure: the level sets of f for two disjoint closed sets in the plane.]

I’ve left this "face" as an exercise because when I did it I was kind of excited about the result.  What do the preimages look like?  Does this look like something you’ve seen before; maybe in physics?  Could you modify the Urysohn equation above to make it seem MORE like something in physics? 

 

So now that you've seen these preimages, it should be relatively clear how to create disjoint neighborhoods around each of the closed sets: for example, f^{-1}([0,\frac{1}{2})) and f^{-1}((\frac{1}{2},1]) are disjoint open sets containing A and B, respectively.  It still takes a bit of proof, but it's nowhere near the difficulty of the standard Urysohn's.

 

I encourage you, further, to draw some pictures.  My rule of thumb is: draw at least five different pictures; four easy ones and a hard one.  Find out where the preimages are in each of these.  Remember, too, that not every metric space looks the same; what would this look like, for example, in the taxi-cab space?  Or in some product space?  Or in the discrete space…?

I’ve been up for a while doing practice qualifying exam questions, and sometimes I hit a point where I just do whatever it is that comes to my mind first, no matter how tedious or silly it seems.  This is a bad habit.  I’ll show why with an example.

 

Here’s the question.  Let C be the unit circle oriented counterclockwise.  Find the integral

\displaystyle\int_{C}\frac{\exp(1 + z^{2})}{z}\, dz.

 

The sophisticated reader will immediately see the solution, but humor me for a moment.  I attempted to do this by Taylor expansion.  The following calculations were done:

 

= \displaystyle\int_{C} \frac{1 + (1+z^{2}) + \frac{(1 + z^{2})^{2}}{2!} + \frac{(1 + z^{2})^{3}}{3!} + \cdots}{z}\, dz

 

Applying the binomial theorem to the numerator terms, we obtain:

 

= \displaystyle\int_{C} \frac{1 + (1+z^{2}) + \frac{\sum_{n=0}^{2}\binom{2}{n}z^{2n}}{2!} + \frac{\sum_{n=0}^{3}\binom{3}{n}z^{2n}}{3!} + \cdots}{z}\, dz

 

And at this point we note that every power of z in the numerator is going to die off when we take the integral except the one producing a \frac{1}{z} term; that is, only the constant term of the numerator survives.  Our residue (the coefficient of \frac{1}{z}) will be:

1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots

which can also be written (slightly more suggestively) as:

\frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots

which we should recognize as the Taylor series of e^{z} evaluated at z = 1; in other words, the residue is e.  Nice!  Now we note that plugging in z = e^{i\theta} to take the contour integral (ignoring all those terms which don't matter) will force us to integrate

\displaystyle e \int_{0}^{2\pi}\frac{1}{e^{i\theta}}ie^{i\theta}\, d\theta = ie\int_{0}^{2\pi}\, d\theta = 2\pi i e.

Cutely, if we think of the Greek letter \pi as being a "p", this solution spells out "2pie".
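For the skeptical, here is a quick numerical sanity check of that answer: parametrize the circle by z = e^{i\theta} and approximate the integral with a Riemann sum (a rough sketch, nothing rigorous):

```python
import cmath

# Riemann-sum approximation of the contour integral over z = e^{i*theta}.
N = 200_000
total = 0j
for k in range(N):
    z = cmath.exp(2j * cmath.pi * k / N)
    total += cmath.exp(1 + z * z) / z * (1j * z) * (2 * cmath.pi / N)  # f(z) dz

print(total)                    # approximately 0 + 17.0795j
print(2j * cmath.pi * cmath.e)  # 2*pi*i*e, also about 17.0795j
```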

 

But now, readers, let's slow down.  This is, indeed, the correct answer.  But if I had just looked at the form of the integrand, I would have seen an entire function divided by z - z_{0} (here z_{0} = 0).  This screams Cauchy Integral Formula.  Indeed, according to the CIF, we should get the solution as

2\pi i \exp(1 + 0^2) = 2\pi i e

which is exactly what we got before, but only took about 4 seconds to do.  It’s nice to be able to check yourself by doing something two different ways, but when time isn’t on your side (like in a qualifying exam situation, for example!) then remember:

Think before you Taylor Expand.

[This homology adventure will, unfortunately, have no pictures.  I want to make some, but my bamboo pad is being irritable.]

Here’s the game: Take the plane.  I remove, say, n points from the plane.  You tell me the homology groups of the resulting space.
