September 6, 2012
This post has a pretty weird title, but the problem is easy to state and uses a few interesting mathematical concepts. It’s worth going through. Let’s start with the basics.
Problem 1. Let $S_k(n) = 1^k + 2^k + \cdots + n^k$. Show that $S_k(n)$ is a polynomial in $n$ for each $k \geq 1$ and that the degree of the polynomial is $k + 1$.
Indeed, for example, we have that $S_1(n) = \frac{n(n+1)}{2}$, as we learned in Calculus, and this is a polynomial of degree 2. Similarly, $S_2(n) = \frac{n(n+1)(2n+1)}{6}$, which is a polynomial of degree 3. In the same respect, $S_3(n) = \left(\frac{n(n+1)}{2}\right)^2$, which is a polynomial of degree 4.
The associated polynomials in this case are given by Faulhaber’s formula:
Theorem (Faulhaber). For $k \geq 1$ we have $\displaystyle S_k(n) = \frac{1}{k+1}\sum_{j=0}^{k} (-1)^j \binom{k+1}{j} B_j\, n^{k+1-j}$.
This formula looks terrifying, but it is not hard to apply in practice. You may be wondering, though, what the $B_j$'s in this formula stand for. These are the strange and wonderful Bernoulli numbers, of course! I always enjoy seeing these creatures, because they unexpectedly pop up in the strangest problems. There are a number of ways to define these numbers, one of which is to just write them out sequentially, starting with $B_0$:

$1,\ -\tfrac{1}{2},\ \tfrac{1}{6},\ 0,\ -\tfrac{1}{30},\ 0,\ \tfrac{1}{42},\ 0,\ -\tfrac{1}{30},\ \ldots$
But in this case it is not so easy to guess the next value. The clever reader will notice that all of the odd-indexed Bernoulli numbers (except $B_1$) are zero, but other than that there does not seem to be a clear pattern. Fortunately, we can construct a function which generates the values $B_n$ as coefficients; we'll call this function (surprise!) a generating function.
Definition. We define the sequence $\{B_n\}$ by $\displaystyle \frac{x}{e^x - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!} x^n$.
Notice that this will, in fact, generate the $B_n$ as coefficients times $\frac{1}{n!}$. Neat. In practice, you can use a program like Mathematica to compute $B_n$ for pretty large values of $n$; but, of course, there are lists available. We can now use Faulhaber's formula above, which gives us (assuming we have proven that the formula holds!) that the sums of $k$th powers of natural numbers form polynomials of degree $k + 1$.
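If you'd rather not fire up Mathematica, here is a small Python sketch (the function names are my own) that computes the $B_n$ from the equivalent recurrence $\sum_{j=0}^{n}\binom{n+1}{j}B_j = 0$ for $n \geq 1$, and then evaluates Faulhaber's formula exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """B_0, ..., B_m (convention B_1 = -1/2), from the recurrence
    sum_{j=0}^{n} C(n+1, j) * B_j = 0 for n >= 1."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / Fraction(n + 1))
    return B

def faulhaber(k, n):
    """1^k + 2^k + ... + n^k via Faulhaber's formula."""
    B = bernoulli_numbers(k)
    total = sum((-1) ** j * comb(k + 1, j) * B[j] * n ** (k + 1 - j)
                for j in range(k + 1))
    return total / Fraction(k + 1)

# Bernoulli numbers as exact fractions: 1, -1/2, 1/6, 0, -1/30, 0, 1/42
print(bernoulli_numbers(6))
print(faulhaber(2, 10))  # prints 385, matching 1 + 4 + 9 + ... + 100
```

The sign $(-1)^j$ accounts for the $B_1 = -\tfrac{1}{2}$ convention used in the recurrence; with the $B_1 = +\tfrac{1}{2}$ convention the formula is usually stated without it.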
But something else happens that's pretty interesting. Let's look at some of the polynomials $S_k$ written out:

$S_1(n) = \tfrac{1}{2}n^2 + \tfrac{1}{2}n$

$S_2(n) = \tfrac{1}{3}n^3 + \tfrac{1}{2}n^2 + \tfrac{1}{6}n$

$S_3(n) = \tfrac{1}{4}n^4 + \tfrac{1}{2}n^3 + \tfrac{1}{4}n^2$

Look at the coefficients in each of these polynomials. Anything strange about them? Consider them for a bit.
Problem. Look at the coefficients. What do you find interesting about them? Note that, in particular, for a fixed $k$, the coefficients of the associated polynomial $S_k$ sum to 1. Convince yourself that this is probably true (do some examples!) and then prove that it is true. Do this before reading the statements below.
Anecdote. I spent quite a while trying to write down the "general form" of a polynomial with elementary symmetric polynomials and roots to try to see if I could prove this fact using some complex analysis and a lot of terms canceling out. This morning, I went into my professor's office to ask him what it means that these coefficients sum up to 1. He then gave me a one-line (maybe a half-line) proof of why this is the case.
Hint. What value would we plug into a polynomial to find the sum of its coefficients? What does plugging in this value mean in terms of the sum $S_k(n)$?
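If you want to "do some examples" by machine first, here is a sketch (helper names are mine) that reads the coefficients of $S_k$ directly off Faulhaber's formula, where the coefficient of $n^{k+1-j}$ is $(-1)^j \binom{k+1}{j} B_j / (k+1)$, and sums them:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """B_0, ..., B_m with the convention B_1 = -1/2."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / Fraction(n + 1))
    return B

def coefficient_sum(k):
    """Sum of the coefficients of the polynomial S_k(n)."""
    B = bernoulli_numbers(k)
    return sum(Fraction((-1) ** j * comb(k + 1, j), k + 1) * B[j]
               for j in range(k + 1))

# Every one of these comes out to exactly 1, as the problem suggests.
assert all(coefficient_sum(k) == 1 for k in range(1, 20))
```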
August 7, 2012
An interesting question came up during my studying today which made me think about some of the different ways we can think about polynomials. The solution to this problem was not immediately obvious to me, and, in fact, it wasn’t until I looked up a completely unrelated problem (in a numerical methods book!) that some solution became clear.
Question: Suppose that $\{p_i\}_{i=1}^{\infty}$ is a sequence of polynomials, each of degree at most $n$, and suppose $p_i \to f$ pointwise. Show that $f$ is also a polynomial of degree no more than $n$.
Some interesting points come up here. First is that we only have pointwise convergence — it wasn’t even immediately obvious to me how to prove the resulting limit was continuous, let alone a polynomial of some degree. Second, we know very little about the polynomials except for what degree they are. This should be an indication that we need to characterize them with respect to something degree-related.
Indeed, polynomials can be represented in a few nice ways. Among these are:
- In the form $a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$, where it is usually stated that $a_n \neq 0$.
- In terms of their coefficients. That is, if we have a list of polynomials of degree 3 to store on a computer, we could create an array where the first column is the constant, the second is the linear term, and so forth. This is sort of like decimal expansion.
- Where they send each point. That is, if we know what $p(x)$ is equal to for each $x \in \mathbb{R}$, we could recover $p$.
- If a polynomial $p$ is of degree $n$ then, somewhat surprisingly, we can improve upon the previous statement: if we know the value of $p(x_i)$ at $n + 1$ distinct points $x_0, x_1, \ldots, x_n$, then we can find $p$, and $p$ is the unique such polynomial of that degree which takes those values. (Note that if we were to have fewer than $n + 1$ points, then many polynomials of degree $n$ could fit the points. Consider, for example, $n = 1$ and the single point $(0, 0)$. Then we have one point and we want to fit a line through it. Clearly this can be done in infinitely many ways.)
This last one is going to be useful for us. So much so that it might be a good idea to prove it.
Lemma. Let $x_0, x_1, \ldots, x_n$ be distinct points in $\mathbb{R}$, and let $y_0, y_1, \ldots, y_n$ be (not necessarily distinct) points in $\mathbb{R}$. Then there is a unique polynomial $p$ of degree at most $n$ such that $p(x_i) = y_i$ for each $i$ considered here.
Proof. This is an exercise in linear algebra. We need to solve the system of linear equations

$a_0 + a_1 x_i + a_2 x_i^2 + \cdots + a_n x_i^n = y_i$

where $i$ spans $0, 1, \ldots, n$, for the constants $a_0, a_1, \ldots, a_n$. Notice that this is simply plugging each $x_i$ into a general polynomial of degree $n$. Notice also that the matrix this forms is a Vandermonde matrix. Since the $x_i$ are distinct, the determinant of this matrix is nonzero, which implies that there is a unique solution. This gives us our coefficients. Note that the resulting polynomial is not necessarily of degree exactly $n$, since some coefficients may be 0, but its degree is at most $n$.
[Note: For those of you who forgot your linear algebra, the end of this goes like this: if we let our coefficients be denoted by the column vector $a = (a_0, a_1, \ldots, a_n)^T$ and our Vandermonde matrix is denoted by $V$, then we want to solve $Va = y$, where $y$ is the column vector with entries $y_0, y_1, \ldots, y_n$. If $V$ has nonzero determinant, then it is invertible, and so $a = V^{-1}y$ gives us our coefficients.]
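As a concrete sketch of this proof (function name mine, and Gaussian elimination rather than an explicit inverse), here is the Vandermonde system solved exactly over the rationals:

```python
from fractions import Fraction

def interpolate_coefficients(xs, ys):
    """Solve V a = y for the coefficients a_0, ..., a_n of the unique
    interpolating polynomial, where V is the Vandermonde matrix with
    rows (1, x_i, x_i^2, ..., x_i^n). Plain Gauss-Jordan elimination
    over exact Fractions; fine for small n."""
    n = len(xs) - 1
    # Augmented matrix [V | y].
    M = [[Fraction(x) ** j for j in range(n + 1)] + [Fraction(y)]
         for x, y in zip(xs, ys)]
    for col in range(n + 1):
        # Find a pivot; distinct x_i guarantee the system is nonsingular.
        piv = next(r for r in range(col, n + 1) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n + 1):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[-1] for row in M]

# p(x) = 1 + 2x - x^2 passes through (0,1), (1,2), (2,1):
print(interpolate_coefficients([0, 1, 2], [1, 2, 1]))
```

(In floating point one would use a dedicated routine instead, since Vandermonde matrices are notoriously ill-conditioned; with exact fractions that concern disappears.)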
Neato. But now we need to specialize this somewhat for our proof.
Corollary. Let the notation and assumptions be as in the last lemma. For $j = 0, 1, \ldots, n$, let $\ell_j$ be the unique polynomial of degree at most $n$ with $\ell_j(x_i) = \delta_{ij}$ (where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$). Then every polynomial $p$ of degree at most $n$ is of the form $p(x) = \sum_{j=0}^{n} p(x_j)\,\ell_j(x)$ for each $x$.
This might be a bit more cryptic, so let's do an example. Let's let $n = 1$ so that we have two points. Let's say $x_0 = 0$ and $x_1 = 1$. Then $\ell_0$ is the unique polynomial of degree at most 1 such that $\ell_0(0) = 1$ and $\ell_0(1) = 0$. Of course, this function will be $\ell_0(x) = 1 - x$. Now $\ell_1(0) = 0$ and $\ell_1(1) = 1$; this gives us that $\ell_1(x) = x$. The theorem now states that any polynomial of degree at most 1 can be written in the form

$p(x) = p(0)(1 - x) + p(1)x.$
For example, let $p(x) = 2x + 1$. Then the corollary says $p(x) = p(0)(1 - x) + p(1)x = (1 - x) + 3x = 1 + 2x$, as we'd expect. The power of this corollary will become clear when we use it in the solution. The proof of this corollary is just a specialization of the previous lemma, so we exclude it.
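The $\ell_j$ have a well-known closed form, $\ell_j(x) = \prod_{i \neq j} \frac{x - x_i}{x_j - x_i}$, which makes the corollary easy to play with on a computer. A small sketch (function names mine):

```python
from fractions import Fraction

def lagrange_basis(xs, j, x):
    """ell_j(x) = prod over i != j of (x - x_i)/(x_j - x_i): the unique
    degree <= n polynomial with ell_j(x_j) = 1 and ell_j(x_i) = 0."""
    result = Fraction(1)
    for i, xi in enumerate(xs):
        if i != j:
            result *= Fraction(x - xi, xs[j] - xi)
    return result

def from_values(xs, values, x):
    """Evaluate p(x) = sum_j p(x_j) * ell_j(x)."""
    return sum(v * lagrange_basis(xs, j, x) for j, v in enumerate(values))

# With x_0 = 0 and x_1 = 1: ell_0(x) = 1 - x and ell_1(x) = x, so for
# p(x) = 2x + 1 we recover p everywhere from the values p(0) = 1, p(1) = 3.
p = lambda x: 2 * x + 1
assert all(from_values([0, 1], [p(0), p(1)], x) == p(x) for x in range(-5, 6))
```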
Solution. Recall, just for notation, that our sequence $p_i \to f$ pointwise. Let's let $x_0, x_1, \ldots, x_n$ be our distinct points, as usual. In addition, let's let $\ell_0, \ell_1, \ldots, \ell_n$ be defined as in the corollary above. Represent each $p_i$ as follows:

$p_i(x) = \sum_{j=0}^{n} p_i(x_j)\,\ell_j(x)$

for each $x$. Here comes the magic: note that $p_i \to f$ at every point, so, in particular, $p_i(x_j) \to f(x_j)$ on each $x_j$ and $p_i(x) \to f(x)$ on each $x$. We obtain

$f(x) = \sum_{j=0}^{n} f(x_j)\,\ell_j(x)$

for each $x$. But this is a finite sum of polynomials of degree at most $n$, which gives us that $f$ is itself a polynomial of degree at most $n$.
I'll admit, I did a bit of digging around after finding the lemma above; in particular, this corollary gives a nice way to represent a polynomial when we do not know its coefficients but do know its values at a certain number of points, and know that its degree is strictly less than that number of points.
Exercise: Try to write out this representation for $n = 2$ and $n = 3$. If you're a programmer, why not make a program that allows you to input some points and some values and spits out the polynomial from the corollary above?
May 18, 2012
[Note: It’s been a while! I’ve now completed most of my pre-research stuff for my degree, so now I can relax a bit and write up some topics. This post will be relatively short just to “get back in the swing of things.”]
In Group Theory, the big question used to be, “Given such-and-such is a group, how can we tell which group it is?”
The Sylow Theorems (proved by Ludwig Sylow) provide a really nice way to do this for finite groups using prime decomposition. In most cases, the process is quite easy. We'll state the theorems here in a slightly shortened form, but you can read about them here. Note that a subgroup which is of order $p^k$ for some $k \geq 1$ is unsurprisingly called a $p$-subgroup. A $p$-subgroup of maximal order in $G$ is called a Sylow $p$-subgroup.
Theorem (Sylow). Let $G$ be a group such that $|G| = p^n m$ for $p$ prime and $p \nmid m$. Then,
- There exists at least one subgroup of $G$ of order $p^n$.
- The Sylow $p$-subgroups are conjugate to one another; that is, if $P_1, P_2$ are Sylow $p$-subgroups, then there is some $g \in G$ such that $gP_1g^{-1} = P_2$. Moreover, for all $g \in G$, we have that $gP_1g^{-1}$ is a Sylow $p$-subgroup.
- The number of Sylow $p$-subgroups of $G$, denoted by $n_p$, is of the form $pk + 1$ for some $k \geq 0$; in other words, $n_p \equiv 1 \pmod{p}$. Moreover, $n_p$ divides $m$.
The first part says that the set of Sylow $p$-subgroups of $G$ is not empty if $p$ divides the order of $G$. Note that this statement is slightly abbreviated (the second part is actually more general, and the third part has a few extra pieces) but this will give us enough to work with.
Problem: Given a group $G$ with $|G| = pq$ for primes $p < q$, is $G$ ever simple (that is, does it ever have no nontrivial proper normal subgroups)? Can we say explicitly what $G$ is?
We use the third part of the Sylow theorems above. We note that $n_q \equiv 1 \pmod{q}$ and $n_q \mid p$, but $p < q$, so this immediately implies that $n_q = 1$ (why?). So we have one Sylow $q$-subgroup; let's call it $Q$. Once we have this, we can use the second part of the Sylow theorems: for each $g \in G$ we have that $gQg^{-1}$ is a Sylow $q$-subgroup, but we've shown that $Q$ is the only one there is! That means $gQg^{-1} = Q$ for every $g \in G$; this says $Q$ is normal in $G$. We have, then, that $G$ isn't simple. Bummer.
On the other hand, we can actually say what this group is. So let's try that. We know the Sylow $q$-subgroup, but we don't know anything about the Sylow $p$-subgroups. We know that $n_p \equiv 1 \pmod{p}$ and $n_p \mid q$, but that's about it. There are two possibilities: either $n_p = 1$ or $n_p = q$.
For the first case, by using the modular relation, if $p$ does not divide $q - 1$ then this forces $n_p = 1$; this gives us a unique normal Sylow $p$-subgroup $P$. Note that since the orders of our normal subgroups multiply up to the order of the group, we have $|PQ| = |P||Q| = pq = |G|$; in other words, $G \cong P \times Q \cong \mathbb{Z}_p \times \mathbb{Z}_q \cong \mathbb{Z}_{pq}$.
For the second case, $n_p = q$, which by the modular relation can only happen when $p$ divides $q - 1$. We will have a total of $q$ subgroups of order $p$ and none of these are normal. This part is a bit more involved (for example, see this post on it), but the punch line is that $G$ will be the nonabelian semidirect product $\mathbb{Z}_q \rtimes \mathbb{Z}_p$.
I'll admit that the last part is a bit hand-wavy, but this should at least show you the relative power of the Sylow theorems. They also come in handy when trying to show something either does or does not have a normal subgroup. Recall that a simple group has no nontrivial proper normal subgroups.
Question. Is there any simple group with $|G| = 165$?
I just picked this number randomly, but it works pretty well for this example. We note that $165 = 3 \cdot 5 \cdot 11$. Let's consider, for kicks, $n_{11}$. We know $n_{11}$ must divide $3 \cdot 5 = 15$ and it must be the case that $n_{11} \equiv 1 \pmod{11}$; putting these two facts together, we get $n_{11} = 1$. This immediately gives us a normal subgroup of order 11, which implies there are no simple groups of order 165.
Question. Is there any simple group with $|G| = 777$?
Alas, alack, you may say that 777 is too big of a number to do, but you'd be dead wrong. Of course, $777 = 3 \cdot 7 \cdot 37$. Use the same argument as above (with $n_{37}$) to show there are no simple groups of this order.
Question. Is there any simple group with $|G| = 98$?

Note that $98 = 2 \cdot 7^2$ is not a product of distinct primes, so we need to do a little work, but not much. Just for fun, let's look at $n_7$. We must have that it is 1 modulo 7 and it must divide 2. Hm. A bit of thinking will give you that $n_7 = 1$, which gives us the same conclusion as above.
Of course, there are examples where this doesn't work so nicely. Think about a group of order 56, for example: there $n_7$ could be 1 or 8 and $n_2$ could be 1 or 7, so the counting theorem alone doesn't hand us a normal subgroup. In order to work with these kinds of groups, one must do a bit more digging. We will look into more of this later.
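All of the counting arguments above can be mechanized. Here is a small sketch (function name mine) that lists the candidates for $n_p$ allowed by the third Sylow theorem, which makes it easy to hunt for orders where the easy argument works:

```python
def sylow_counts(order, p):
    """Candidates for n_p: divisors of m = order / p^k (where p^k is the
    largest power of p dividing order) that are congruent to 1 mod p."""
    m = order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(sylow_counts(165, 11))  # prints [1]: forced normal Sylow 11-subgroup
print(sylow_counts(777, 37))  # prints [1]: forced normal Sylow 37-subgroup
print(sylow_counts(56, 7))    # prints [1, 8]: inconclusive on its own
```

When the returned list is exactly `[1]`, the Sylow $p$-subgroup is unique, hence normal, and the group cannot be simple.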
December 20, 2011
I’m going through a few books so that I can start doing lots and lots of problems to prepare for my quals. I’ll be posting some of the “cuter” problems.
Here’s one that, on the surface, looks strange. But ultimately, the solution is straightforward.
Problem. Find an uncountable subset $A \subseteq \mathbb{R}$ such that $A$ has empty interior.