An interesting question came up during my studying today which got me thinking about some of the different ways we can represent polynomials.  The solution to this problem was not immediately obvious to me, and, in fact, it wasn’t until I looked up a completely unrelated problem (in a numerical methods book!) that a solution became clear.

Question:  Suppose that \{f_{n}\} is a sequence of polynomials with f_{n}:{\mathbb R}\to {\mathbb R} each of degree m > 1 and suppose f_{n}\to f pointwise.  Show that f is also a polynomial of degree no more than m.

 

Some interesting points come up here.  First is that we only have pointwise convergence — it wasn’t even immediately obvious to me how to prove the resulting limit was continuous, let alone a polynomial of some degree.  Second, we know very little about the polynomials except for what degree they are.  This should be an indication that we need to characterize them with respect to something degree-related.

Indeed, polynomials can be represented in a few nice ways.  Among these are:

 

  • In the form f(x) = a_{0} + a_{1}x + \cdots + a_{n}x^{n} where it is usually stated that a_{n}\neq 0.
  • In terms of their coefficients.  That is, if we have a list of polynomials of degree 3 to store on a computer, we could create an array where the first column is the constant, the second is the linear term, and so forth.  This is sort of like decimal expansion.
  • Where they send each point.  That is, if we know what f(x) is equal to for each x, we could recover it.
  • If a polynomial is of degree m then, somewhat surprisingly, we can improve upon the previous statement: if we know the value of f at m + 1 distinct points, then we can find f, and f is the unique polynomial of degree at most m which has those values.  (Note that if we were to have m+1 points and a polynomial of degree k > m, then many polynomials of this degree could fit the points.  Consider, for example, m = 0 and k = 1.  Then we have one point and we want to fit a line through it.  Clearly this can be done in infinitely many ways.)

 

  This last one is going to be useful for us.  So much so that it might be a good idea to prove it.

 

Lemma.  Let x_{1}, \dots, x_{m+1} be m + 1 distinct points in {\mathbb R}, and let y_{1}, \dots, y_{m+1} be points in {\mathbb R} (not necessarily distinct).  Then there is a unique polynomial f of degree at most m such that f(x_{i}) = y_{i} for each i considered here.

 

Proof.  This is an exercise in linear algebra.  We need to solve the system of linear equations

\displaystyle \sum_{n = 0}^{m} a_{n}x_{i}^{n} = y_{i}

where i spans 1, \dots, m+1, for the constants a_{n}\in {\mathbb R}.  Notice that this is simply plugging each x_{i} into a general polynomial of degree m with coefficients a_{0}, \dots, a_{m}.  The matrix of this system is a Vandermonde matrix, whose determinant is \prod_{1\leq i < j\leq m+1}(x_{j} - x_{i}); since the x_{i} are distinct, this determinant is nonzero, which implies that there is a unique solution.  This gives us our coefficients.  Note that the resulting polynomial need not have degree exactly m, since some coefficients may be 0, but its degree is at most m.  \diamond

 

[Note: For those of you who forgot your linear algebra, the end of this goes like this: if we let our coefficients be denoted by the column vector b and our Vandermonde matrix by A, then we want to solve Ab = Y, where Y is the column vector with entries y_{i}.  If A has nonzero determinant, then it is invertible, and so b = A^{-1}Y gives us our coefficients.]
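If you like to see this computed rather than just asserted, here is a minimal sketch in Python with NumPy (my choice of tools, and the sample points are made up) that builds the Vandermonde matrix for a few points and solves for the coefficients:

```python
import numpy as np

# Distinct sample points x_i and prescribed values y_i (m + 1 of each; here m = 2).
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 7.0])   # the values of 1 + x + x^2 at those points

# Rows are (1, x_i, x_i^2, ..., x_i^m): the Vandermonde matrix A.
A = np.vander(x, increasing=True)

# Solve A b = Y for the coefficient vector b = (a_0, a_1, ..., a_m).
b = np.linalg.solve(A, y)
print(b)   # roughly [1. 1. 1.], recovering 1 + x + x^2
```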

 

Neato.  But now we need to specialize this somewhat for our proof. 

 

Corollary.  Let the notation and assumptions be as in the last lemma.  For i\in \{1, \dots, m+1\}, let g_{i} be the unique polynomial of degree at most m with g_{i}(x_{j}) = \delta_{i,j} (where \delta_{i,j} = 0 if i\neq j and \delta_{i,i} = 1).  Then every polynomial f of degree at most m is of the form \displaystyle f(x) = \sum_{i = 1}^{m+1}f(x_{i})g_{i}(x) for each x\in {\mathbb R}.

 

This might be a bit more cryptic, so let’s do an example.  Let’s let m = 1 so that we have two points.  Let’s say x_{1} = 0 and x_{2} = 1.  Then g_{1} is the unique polynomial of degree at most 1 such that g_{1}(0) = 1 and g_{1}(1) = 0.  Of course, this function will be g_{1}(x) = -x + 1.  Similarly, g_{2}(0) = 0 and g_{2}(1) = 1; this gives us that g_{2}(x) = x.  The corollary now states that any polynomial of degree at most 1 can be written in the form

f(x) = f(0)g_{1}(x) + f(1)g_{2}(x)

= f(0)(-x+1)+ f(1)x = (f(1) - f(0))x + f(0).

For example, let f(x) = 2x + 1, so that f(0) = 1 and f(1) = 3.  Then the corollary says f(x) = (3 - 1)x + 1 = 2x + 1, as we’d expect.  The power of this representation will become clear when we use it in the solution.  The proof of the corollary is just a specialization of the previous lemma (the sum on the right-hand side is a polynomial of degree at most m which agrees with f at the m + 1 points x_{i}, so by uniqueness it must equal f), so we exclude the details.

 

Solution.  Recall, just for notation, that our sequence \{f_{k}\}\to f pointwise.  Let’s let x_{1}, \dots, x_{m+1} be our distinct points, as usual.  In addition, let’s let g_{1}, \dots, g_{m+1} be defined as in the corollary above. Represent each f_{k} as follows:

\displaystyle f_{k}(x) = \sum_{j = 1}^{m+1}f_{k}(x_{j})g_{j}(x)

for each x\in {\mathbb R}.  Here comes the magic: let k\to\infty and note that f_{k} \to f at every point, so, in particular, at each x_{j} and at x.  We obtain

\displaystyle f(x) = \sum_{j = 1}^{m+1}f(x_{j})g_{j}(x)

for each x\in {\mathbb R}.  But this is a sum of polynomials of degree at most m, which gives us that f is itself a polynomial of degree at most m.  \Box
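For a concrete (made-up) instance of the argument, take m = 2 and f_{k}(x) = x^{2} + \frac{x}{k}, which converges pointwise to f(x) = x^{2}.  Using the points x_{1} = 0, x_{2} = 1, x_{3} = 2, we have f_{k}(0) = 0, f_{k}(1) = 1 + \frac{1}{k}\to 1, and f_{k}(2) = 4 + \frac{2}{k}\to 4, and the unique polynomial of degree at most 2 passing through (0,0), (1,1), (2,4) is exactly x^{2}, the pointwise limit.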

 

I’ll admit, I did a bit of digging around after finding the lemma above; in particular, this corollary gives a nice way to represent a polynomial when we do not know its coefficients but do know its values at some collection of points and know that its degree is strictly less than the number of points.

 

Exercise: Try to write out this representation for m = 2 and m = 3.  If you’re a programmer, why not make a program that allows you to input some points and some values and spits out the polynomial from the corollary above?
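To get you started on the programming half, here is a minimal Python sketch (the function name and sample data are my own, and I’m using the standard closed form g_{i}(x) = \prod_{j\neq i}\frac{x - x_{j}}{x_{i} - x_{j}}, which the post did not derive but which satisfies g_{i}(x_{j}) = \delta_{i,j}):

```python
def interpolant(xs, ys):
    """Return the unique polynomial of degree at most len(xs) - 1 passing
    through the points (xs[i], ys[i]), as a callable built from the basis
    polynomials g_i with g_i(x_j) = delta_{ij}."""
    if len(xs) != len(ys):
        raise ValueError("need as many values as points")

    def g(i, x):
        # g_i(x) = product over j != i of (x - x_j) / (x_i - x_j)
        result = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                result *= (x - xj) / (xs[i] - xj)
        return result

    def f(x):
        return sum(ys[i] * g(i, x) for i in range(len(xs)))

    return f

# The m = 1 example from above: x_1 = 0, x_2 = 1, f(x) = 2x + 1.
f = interpolant([0.0, 1.0], [1.0, 3.0])
print(f(0.5))   # 2.0
print(f(4.0))   # 9.0
```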


This is a really sweet deal.  In particular, if we know that our matrix is orthogonal, we can cut down significantly on the time it takes to find the inverse, since the inverse of an orthogonal matrix is just its transpose.  Combined with the spectral theorem (which states that if the matrix is symmetric, there is an orthogonal matrix S such that S^{-1}AS is a diagonal matrix with entries the eigenvalues) this gives us a tool for finding diagonalizations of matrices.
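For what it’s worth, here is a quick numerical sanity check in Python with NumPy (the symmetric matrix is one I made up); np.linalg.eigh hands back the eigenvalues together with an orthogonal matrix of eigenvectors, and the “cheap inverse” is just the transpose:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric, so the spectral theorem applies

eigenvalues, S = np.linalg.eigh(A)    # columns of S are orthonormal eigenvectors

# S is orthogonal, so its inverse is simply its transpose (no elimination needed).
print(np.allclose(S.T @ S, np.eye(2)))                   # True
print(np.allclose(S.T @ A @ S, np.diag(eigenvalues)))    # S^{-1} A S is diagonal
```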


Linear Algebra is strange.  On the surface, we have a ton of tricks that we can apply to things to make calculations nicer (diagonalizing matrices, finding orthonormal bases,…) but deeper down a lot of things connect to one-another in really unexpected ways — to me, anyway!

Here’s the problem.  We have an n\times n matrix called M, and we have n-1 eigenvalues.  We have ALMOST every eigenvalue, but we’re just missing one.  What can we do about this?
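I won’t spoil where the post goes, but one standard trick (my guess, so treat it as an assumption rather than the post’s actual method) uses the fact that the trace of M is the sum of all n eigenvalues counted with multiplicity, so the missing one is the trace minus the sum of the known ones:

```python
import numpy as np

# A made-up lower-triangular matrix, so its eigenvalues are its diagonal: 2, 3, 5.
M = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 5.0]])

known = [2.0, 3.0]   # suppose these are the n - 1 eigenvalues we already have

# trace(M) equals the sum of all eigenvalues, so the leftover is the missing one.
missing = np.trace(M) - sum(known)
print(missing)   # 5.0
```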


QR-Factorization.

October 28, 2010

While I was studying for a linear algebra exam, I discovered a deep-seated love for QR-Factorization.  I’m not going to explain why this is important, or why we should care about such a factorization; in fact, I have no idea why this is important or why I should care about this.  I was directed to this, so I’ll direct you there too.

Anyhow, here’s the game.  Given some square matrix A (squareness is not strictly necessary, but it makes things easier), we want to decompose A into an orthogonal matrix Q and an upper-triangular matrix R.
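If you want to poke at this numerically before reading on, here is a short Python/NumPy sketch (the matrix is made up; np.linalg.qr does the factorization for you):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

Q, R = np.linalg.qr(A)

print(np.allclose(Q.T @ Q, np.eye(2)))   # Q is orthogonal
print(np.allclose(np.triu(R), R))        # R is upper-triangular
print(np.allclose(Q @ R, A))             # and Q R recovers A
```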


On a linear algebra exam, I was asked something like, “If A is a real n\times n matrix and A^{2} is invertible, prove that A is also invertible.”

Apparently, a number of students attempted the problem this way:

A^{2} * (A^{2})^{-1} = A * (A * A^{-1}) * A^{-1} = A * A^{-1} = 1

and concluded that A had an inverse.  What’s wrong with this argument?  The astute reader will note that we’re actually assuming that A^{-1} exists to prove its existence!  That’s a little circular.  Instead, we’re going to prove something slightly more general.
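(For a quick non-circular sanity check, though it is not the more general statement hinted at above: \det(A^{2}) = \det(A)^{2}, and A^{2} being invertible means \det(A^{2})\neq 0, so \det(A)\neq 0 and hence A is invertible; nowhere did we assume A^{-1} exists.)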


I mean, okay, we wrote about the real spectral theorem, but how much do we really know about it?  A lot, I hope!  I hope you remember that we needed T to be self-adjoint in order to conclude that we have an orthonormal basis consisting of eigenvectors of T!


Okay, so, last time we talked about the spectral theorem for complex vector spaces.  What did it say?  Do you remember?  Don’t look.  Fine, look.  Either way, it said that we have an orthonormal basis made of eigenvectors of some linear map T if and only if T is normal.  Now, being normal is not that big of a deal.  It just means that TT^{\ast} = T^{\ast}T.  Not a biggie, right?  Yeah.


(Note) So, the general spectral theorem is pretty sweet, but (as Sheldon Axler does in Linear Algebra Done Right, the book that I’m essentially following in this blog) I’m going to split it up into two parts. In “real” math, I suppose we should consider two cases: when the field is algebraically closed and when it is not.  The algebraically closed case is going to be nearly identical to the complex case.  But because we don’t know “how” algebraically closed the other field is, I’m not entirely certain that the “not algebraically closed” case follows from the Reals case of the theorem.  For example, if we were to use the rationals in place of the reals, we would most likely be able to produce examples which did not follow the Reals version of the spectral proof.  Either way…we will mostly be using this “in real life” in the case that the field is either the reals or the complexes.  Thus, I do not feel too bad for not proving this in its full generality.

So, let’s wonder something for a second: why have I been proving all these random things?  What the hell were we looking for again?


This part is going to be really exciting, no lies.  In fact, we’re really just going to prove one or two theorems and that’ll be that.  The reason for doing so is because one of these theorems is so elegant and beautiful that I want you to focus on it.  Specifically, the fact that the matrix associated to an adjoint linear map is the conjugate transpose of the matrix associated to the original linear map.  Best.


My next post is going to be this nice proof of the fact that if we have M(T) for some linear map T: V\rightarrow V, where V is a nontrivial finite-dimensional vector space (and the matrix is taken with respect to an orthonormal basis), then M(T^{\ast}) is really easy to find: it’s simply the conjugate transpose of M(T).  This is exactly what it sounds like: we find the transpose of M(T) (by switching a_{i,j} with a_{j,i} for all i and j) and then replace every element in the matrix with its conjugate.  I guess the notation should be (\overline{M(T)})^{t} = M(T^{\ast}).  Which is pretty complicated looking, but it’s really not hard to do at all.  In fact, the point of this post is to give a detailed example.  Actually, let’s do two examples, and you’re going to find that half of the time this is much easier to do than what I’ve said above.
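If you want to check an example numerically before the detailed ones, here is a tiny Python/NumPy sketch (the matrix is my own invention): the conjugate transpose really is just “transpose, then conjugate every entry.”

```python
import numpy as np

M = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 4 + 0j]])

# Swap a_{ij} with a_{ji}, then replace every entry with its complex conjugate.
conjugate_transpose = M.T.conj()
print(conjugate_transpose)
# [[1.-2.j 0.-1.j]
#  [3.+1.j 4.-0.j]]

# NumPy also lets you write this as M.conj().T; the two orders give the same matrix.
```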
