## Pointwise Convergent Sequence of Polynomials.

### August 7, 2012

An interesting question came up during my studying today which made me think about some of the different ways we can think about polynomials.  The solution to this problem was not immediately obvious to me, and, in fact, it wasn’t until I looked up a completely unrelated problem (in a numerical methods book!) that a solution became clear.

Question:  Suppose that $\{f_{n}\}$ is a sequence of polynomials $f_{n}:{\mathbb R}\to {\mathbb R}$, each of degree at most $m$, and suppose $f_{n}\to f$ pointwise.  Show that $f$ is also a polynomial of degree no more than $m$.

Some interesting points come up here.  First is that we only have pointwise convergence — it wasn’t even immediately obvious to me how to prove the resulting limit was continuous, let alone a polynomial of some degree.  Second, we know very little about the polynomials except for what degree they are.  This should be an indication that we need to characterize them with respect to something degree-related.

Indeed, polynomials can be represented in a few nice ways.  Among these are:

• In the form $f(x) = a_{0} + a_{1}x + \cdots + a_{n}x^{n}$ where it is usually stated that $a_{n}\neq 0$.
• In terms of their coefficients.  That is, if we have a list of polynomials of degree 3 to store on a computer, we could create an array where the first column is the constant, the second is the linear term, and so forth.  This is sort of like decimal expansion.
• Where they send each point.  That is, if we know what $f(x)$ is equal to for each $x$, we could recover it.
• If a polynomial is of degree $m$ then, somewhat surprisingly, we can improve upon the previous statement: if we know the value of $f$ at $m + 1$ distinct points, then we can recover $f$, and $f$ is the unique polynomial of degree at most $m$ which takes those values.  (Note that if we were to have $m+1$ points and a polynomial of degree $k > m$, then many polynomials of this degree could fit the points.  Consider, for example, $m = 0$ and $k = 1$.  Then we have one point and we want to fit a line through it.  Clearly this can be done in infinitely many ways.)

This last one is going to be useful for us.  So much so that it might be a good idea to prove it.

Lemma.  Let $x_{1}, \dots, x_{m+1}$ be $m + 1$ distinct points in ${\mathbb R}$, and let $y_{1}, \dots, y_{m+1}$ be points in ${\mathbb R}$ (not necessarily distinct).  Then there is a unique polynomial $f$ of degree at most $m$ such that $f(x_{i}) = y_{i}$ for each $i$ considered here.

Proof.  This is an exercise in linear algebra.  We need to solve the system of linear equations

$\displaystyle \sum_{n = 0}^{m} a_{n}x_{i}^{n} = y_{i}$

where $i$ spans $1, \dots, m+1$, for the constants $a_{n}\in {\mathbb R}$.  Notice that this is simply plugging each $x_{i}$ into a general polynomial of degree $m$, and that the coefficient matrix of this system is a Vandermonde matrix.  Since the $x_{i}$ are distinct, the determinant of this matrix is nonzero, which implies that there is a unique solution.  This gives us our coefficients; note that the resulting polynomial is not necessarily of degree exactly $m$, since some coefficients may be 0, but its degree is at most $m$.  $\diamond$
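Written out, the linear system in the proof has the following matrix form (the coefficient matrix is the Vandermonde matrix):

```latex
\begin{pmatrix}
1 & x_{1} & x_{1}^{2} & \cdots & x_{1}^{m} \\
1 & x_{2} & x_{2}^{2} & \cdots & x_{2}^{m} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{m+1} & x_{m+1}^{2} & \cdots & x_{m+1}^{m}
\end{pmatrix}
\begin{pmatrix} a_{0} \\ a_{1} \\ \vdots \\ a_{m} \end{pmatrix}
=
\begin{pmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{m+1} \end{pmatrix}
```

Its determinant is $\prod_{1 \leq i < j \leq m+1}(x_{j} - x_{i})$, which is nonzero precisely when the $x_{i}$ are distinct.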

[Note: For those of you who forgot your linear algebra, the end of this goes like this: if we let our coefficients be denoted by the column vector $b$ and our Vandermonde matrix be denoted by $A$, then we want to solve $Ab = Y$, where $Y$ is the column vector with entries $y_{i}$.  If $A$ has non-zero determinant, then it is invertible, and so $b = A^{-1}Y$ gives us our coefficients.]
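In fact, solving this system is easy to do on a computer.  Here is a minimal sketch in plain Python (the function name `interpolate` is my own; I use exact rational arithmetic via `fractions` and Gaussian elimination, so no libraries are assumed):

```python
from fractions import Fraction

def interpolate(xs, ys):
    """Solve the Vandermonde system A b = Y for the coefficients.

    xs: the distinct points x_1, ..., x_{m+1}
    ys: the target values y_1, ..., y_{m+1}
    Returns [a_0, a_1, ..., a_m] with f(x) = a_0 + a_1 x + ... + a_m x^m.
    """
    n = len(xs)
    # Augmented matrix [A | Y], where A[i][j] = x_i ** j (Vandermonde).
    M = [[Fraction(x) ** j for j in range(n)] + [Fraction(y)]
         for x, y in zip(xs, ys)]
    # Gaussian elimination with partial pivoting; since the xs are
    # distinct, the determinant is nonzero and a pivot always exists.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution recovers the coefficients.
    coeffs = [Fraction(0)] * n
    for row in range(n - 1, -1, -1):
        s = M[row][n] - sum(M[row][c] * coeffs[c] for c in range(row + 1, n))
        coeffs[row] = s / M[row][row]
    return coeffs

# Recover f(x) = 2x + 1 from its values at 0 and 1 (so m = 1, two points).
print(interpolate([0, 1], [1, 3]))  # [Fraction(1, 1), Fraction(2, 1)]
```

Note that some coefficients may come back $0$, which is exactly the "degree at most $m$" phenomenon from the proof.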

Neato.  But now we need to specialize this somewhat for our proof.

Corollary.  Let the notation and assumptions be as in the last lemma.  For $i\in \{1, \dots, m+1\}$, let $g_{i}$ be the unique polynomial of degree at most $m$ with $g_{i}(x_{j}) = \delta_{i,j}$ (where $\delta_{i,j} = 0$ if $i\neq j$ and $\delta_{i,i} = 1$).  Then every polynomial $f$ of degree at most $m$ is of the form $\displaystyle f(x) = \sum_{i = 1}^{m+1}f(x_{i})g_{i}(x)$ for each $x\in {\mathbb R}$.

This might be a bit more cryptic, so let’s do an example.  Let’s let $m = 1$ so that we have two points.  Let’s say $x_{1} = 0$ and $x_{2} = 1$.  Then $g_{1}$ is the unique polynomial of degree at most 1 such that $g_{1}(0) = 1$ and $g_{1}(1) = 0$.  Of course, this function is $g_{1}(x) = -x + 1$.  Similarly, $g_{2}(0) = 0$ and $g_{2}(1) = 1$, which gives us $g_{2}(x) = x$.  The corollary now states that any polynomial of degree at most $1$ can be written in the form

$f(x) = f(0)g_{1}(x) + f(1)g_{2}(x)$

$= f(0)(-x+1)+ f(1)x = (f(1) - f(0))x + f(0)$.

For example, let $f(x) = 2x + 1$.  Then $f(0) = 1$ and $f(1) = 3$, so the corollary says $f(x) = (3 - 1)x + 1 = 2x + 1$, as we’d expect.  The power of this representation will become clear when we use it in the solution.  The proof of the corollary is just a specialization of the previous lemma, so we omit it.
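The $m = 1$ identity above is easy to sanity-check numerically (a quick sketch in plain Python; the particular sample points here are my own choices):

```python
def g1(x):  # the basis polynomial with g1(0) = 1 and g1(1) = 0
    return -x + 1

def g2(x):  # the basis polynomial with g2(0) = 0 and g2(1) = 1
    return x

def f(x):  # any polynomial of degree at most 1, e.g. f(x) = 2x + 1
    return 2 * x + 1

# The corollary: f(x) = f(0) g1(x) + f(1) g2(x) for every x.
for x in [-2.0, -0.5, 0.0, 0.25, 1.0, 3.0]:
    assert f(x) == f(0) * g1(x) + f(1) * g2(x)
print("identity holds at all sampled points")
```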

Solution.  Recall, just for notation, that our sequence $\{f_{k}\}\to f$ pointwise.  Let’s let $x_{1}, \dots, x_{m+1}$ be our distinct points, as usual.  In addition, let’s let $g_{1}, \dots, g_{m+1}$ be defined as in the corollary above. Represent each $f_{k}$ as follows:

$\displaystyle f_{k}(x) = \sum_{j = 1}^{m+1}f_{k}(x_{j})g_{j}(x)$

for each $x\in {\mathbb R}$.  Here comes the magic: let $k\to\infty$ and note that $f_{k} \to f$ at every point, so, in particular, at each $x_{j}$ and at $x$ itself.  We obtain

$\displaystyle f(x) = \sum_{j = 1}^{m+1}f(x_{j})g_{j}(x)$

for each $x\in {\mathbb R}$.  But the right-hand side is a linear combination of the polynomials $g_{j}$, each of degree at most $m$, which gives us that $f$ is itself a polynomial of degree at most $m$.  $\Box$
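To see the mechanism of the proof concretely, here is a small sketch with a made-up sequence of my own, $f_{k}(x) = x^{2} + x/k$, which converges pointwise to $x^{2}$.  The proof only ever looks at the values of $f_{k}$ at the $m + 1 = 3$ sample points, and those values converge to the values of the limit:

```python
def f_k(k, x):
    # A sample sequence of degree-2 polynomials converging pointwise to x^2.
    return x ** 2 + x / k

xs = [0.0, 1.0, 2.0]  # m + 1 = 3 distinct sample points

# As k grows, the sampled values approach [0, 1, 4], the values of
# f(x) = x^2 at the same points; those limiting values pin down the
# unique interpolant of degree at most 2, which is the limit itself.
for k in [1, 10, 1000]:
    print(k, [f_k(k, x) for x in xs])
```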

I’ll admit, I did a bit of digging around after finding the lemma above; in particular, the representation in the corollary (this is the Lagrange form of the interpolating polynomial) is a nice way to describe a polynomial when we do not know its coefficients but do know its values at enough points, provided its degree is less than the number of points.

Exercise: Try to write out this representation for $m = 2$ and $m = 3$.  If you’re a programmer, why not make a program that allows you to input some points and some values and spits out the polynomial from the corollary above?
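Here is one possible take on the programming part of the exercise (a sketch; the function names are mine).  Rather than solving a linear system, it builds each $g_{i}$ directly in product form, $g_{i}(x) = \prod_{j \neq i} (x - x_{j})/(x_{i} - x_{j})$, which clearly satisfies $g_{i}(x_{j}) = \delta_{i,j}$:

```python
def lagrange_basis(xs, i):
    """The basis polynomial g_i with g_i(x_j) = 1 if i == j else 0."""
    def g(x):
        prod = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                prod *= (x - xj) / (xs[i] - xj)
        return prod
    return g

def interpolant(xs, ys):
    """The unique polynomial of degree < len(xs) through the (x_i, y_i)."""
    basis = [lagrange_basis(xs, i) for i in range(len(xs))]
    return lambda x: sum(y * g(x) for y, g in zip(ys, basis))

# m = 2: recover f(x) = x^2 from its values at three points.
f = interpolant([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
print(f(3.0))  # 9.0
```

This returns the polynomial as a callable rather than as a coefficient list, which is exactly the corollary's point of view: the polynomial is its values.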

## Orthogonal Complement of Even Functions.

### August 5, 2012

Question:  Consider the subspace $E$ of $L^{2}([-1,1])$ consisting of even functions (that is, functions with $f(x) = f(-x)$).  Find the orthogonal complement of $E$.

One Solution.  It’s easy to prove $E$ is a subspace.  Next, every function in this space can be written as the sum of an even and an odd function; more precisely, given $f\in L^{2}([-1,1])$ we have that $\frac{f(x) + f(-x)}{2}$ is even, $\frac{f(x) - f(-x)}{2}$ is odd, and $f(x) = \frac{f(x) + f(-x)}{2} + \frac{f(x) - f(-x)}{2}$.  This decomposition is unique: if $f$ is both even and odd, then $f(x) = f(-x) = -f(x)$ for each $x$, giving us $f(x) = 0$.  Moreover, the product of an even function and an odd function is odd, so it integrates to $0$ over $[-1,1]$; that is, even and odd functions are orthogonal in $L^{2}([-1,1])$.  Hence, the orthogonal complement of $E$ is the set of odd functions.  $\diamond$
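The even/odd splitting is easy to sanity-check numerically (plain Python; the test function and sample points below are arbitrary choices of mine):

```python
def f(x):
    # An arbitrary test function with both even and odd parts.
    return x ** 3 + x ** 2 + 1

def even_part(f, x):
    return (f(x) + f(-x)) / 2

def odd_part(f, x):
    return (f(x) - f(-x)) / 2

for x in [-1.0, -0.5, 0.0, 0.3, 1.0]:
    # The two parts really are even/odd and they sum back to f.
    assert even_part(f, x) == even_part(f, -x)
    assert odd_part(f, x) == -odd_part(f, -x)
    assert abs(even_part(f, x) + odd_part(f, x) - f(x)) < 1e-12
print("decomposition checks out")
```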

Here’s another solution that "gets your hands dirty" by manipulating the integral.

Another Solution.  We want to find all $g\in L^{2}([-1,1])$ such that $\langle f, g \rangle_{2} = 0$ for every even function $f\in L^{2}([-1,1])$.  This is equivalent to finding all such $g$ with $\displaystyle \int_{-1}^{1}f(x)g(x)\, dx = 0$.  Assume $g$ is in the orthogonal complement.  That is,

$\displaystyle 0 = \int_{-1}^{1}f(x)g(x)\, dx = \int_{-1}^{0}f(x)g(x)\, dx + \int_{0}^{1}f(x)g(x)\, dx$

$\displaystyle = -\int_{1}^{0}f(-x)g(-x)\, dx + \int_{0}^{1}f(x)g(x)\, dx$

The last equality here re-parameterizes the first integral by letting $x\mapsto -x$, but note that our new $dx$ gives us the negative sign.

$\displaystyle = \int_{0}^{1}f(-x)g(-x)\, dx + \int_{0}^{1}f(x)g(x)\, dx$

$\displaystyle = \int_{0}^{1}f(x)g(-x)\, dx + \int_{0}^{1}f(x)g(x)\, dx$

$\displaystyle = \int_{0}^{1}f(x)g(-x) + f(x)g(x)\, dx = \int_{0}^{1}f(x)(g(x) + g(-x))\, dx$.

We may choose $f(x) = g(x) + g(-x)$ since this is an even function, and we note that this gives us

$\displaystyle 0 = \int_{0}^{1}(g(x) + g(-x))^{2}\, dx$.

Since $(g(x) + g(-x))^{2}\geq 0$, it must be the case that $(g(x) + g(-x))^{2} = 0$.  [Note: The fact that this is only true "almost everywhere" is implicit in the definition of $L^{2}$.]  Hence, $g(x) + g(-x) = 0$, giving us that $g(-x) = -g(x)$; that is, $g$ is odd.

We now have one direction: that if $g$ is in the orthogonal complement, then it will be odd.  Now we need to show that if $g$ is any odd function, it is in the orthogonal complement.  To this end, suppose $g$ is an odd function.  Then by the above, we have

$\displaystyle \langle f, g \rangle_{2} = \int_{-1}^{1}f(x)g(x)\, dx = \int_{0}^{1}f(x)(g(x) + g(-x))\, dx = 0$

where the last equality comes from the fact that $g$ is odd.  $\diamond$
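As a final numeric sanity check (a sketch with a made-up even $f$ and odd $g$ of my own), a symmetric quadrature rule reports an inner product of essentially zero:

```python
def inner_product(f, g, n=2000):
    """Midpoint-rule approximation of the L^2([-1, 1]) inner product."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h  # midpoint of the i-th subinterval
        total += f(x) * g(x)
    return total * h

def f(x):  # even
    return x ** 2

def g(x):  # odd
    return x ** 3 - x

# Even times odd is odd, so the symmetric nodes cancel pairwise.
print(abs(inner_product(f, g)) < 1e-10)  # True
```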