Morera’s Theorem.

October 29, 2010

Morera’s theorem, named after the mathematician Giacinto Morera (whose name is pretty sweet but is second only to his ultra-fly mustache), is an extremely important result in complex analysis: it states that if f is a continuous function defined on an open set D in the complex plane, and the integral of f around every closed curve C in D is zero, then in fact f must be complex differentiable (holomorphic) everywhere in D.  This is a common tool to use in the proofs of other theorems, and in its own right it shows that a continuous function satisfying this condition is actually much nicer than “just” continuous.
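In symbols, the hypothesis and conclusion read:


\oint_{C} f(z)\, dz = 0 \text{ for every closed curve } C\subseteq D \quad\implies\quad f \text{ is holomorphic on } D.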

Read the rest of this entry »



QR-Factorization.

October 28, 2010

While I was studying for a linear algebra exam, I discovered a deep-seated love for QR-Factorization.  I’m not going to explain why this is important, or why we should care about such a factorization; in fact, I have no idea why this is important or why I should care about this.  I was directed to this, so I’ll direct you there too.

Anyhow, here’s the game.  Given some square matrix A (squareness is not necessary, but it makes things easier), we want to decompose A into an orthogonal matrix Q and an upper-triangular matrix R.
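As a quick illustration, here is a minimal sketch of the factorization via classical Gram–Schmidt, assuming NumPy is available (the function name is mine, and this version assumes the columns of A are linearly independent):

```python
import numpy as np

def qr_gram_schmidt(A):
    """QR factorization of a square matrix via classical Gram-Schmidt.

    The columns of A are orthonormalized one at a time to build Q;
    R records the projection coefficients, so it comes out
    upper-triangular.  Assumes A has linearly independent columns.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    Q = np.zeros_like(A)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            # coefficient of the j-th column along the i-th orthonormal direction
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[2.0, 1.0], [1.0, 1.0]])
Q, R = qr_gram_schmidt(A)
# Q has orthonormal columns, R is upper-triangular, and Q @ R recovers A
```

(In practice one would use a library routine such as numpy.linalg.qr, which uses a numerically stabler method, but the sketch above shows the idea.)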

Read the rest of this entry »

Wordy Introduction, Motivation.

When you first start high school algebra, the big thing is FOIL-ing and factoring quadratics, right?  When you get to calculus, the big things are derivatives and integrals.  Then when you get to college and start doing math, things get a little tougher.  We start learning about abstract structures, and these become increasingly specific and increasingly complex as we go along.

Read the rest of this entry »

Here’s the cute proof of the week.  Liouville’s Theorem (in complex analysis) is coming up (because this theorem is so important to me, I’m trying to scrounge up a lot of applications for it) and so I wanted to just give a cute little corollary that comes directly from Liouville’s Theorem.  First, let me just state Liouville really quickly:


Theorem (Liouville).  A function f:{\mathbb C}\rightarrow {\mathbb C} which is bounded (in the sense that |f(z)| < M for some real number M and all z) and entire (complex differentiable everywhere) is constant.


Now, let’s talk about Bump Functions for a second.  A bump function is a function f:{\mathbb R}^{n} \rightarrow {\mathbb R} which is bounded, smooth, and has compact support (it is zero everywhere except on a compact set).  We can generalize this to complex functions by simply changing the domain and range to the complex numbers: g:{\mathbb C}^{n}\rightarrow {\mathbb C}.  So why don’t we ever hear about complex bump functions?
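For contrast, the real case is easy to exhibit.  Here is a minimal sketch of the classic real bump function (the function name is mine):

```python
import math

def bump(x):
    """The classic smooth bump on the real line: positive on (-1, 1),
    identically zero outside [-1, 1], and smooth everywhere, since all
    derivatives of exp(-1/(1 - x^2)) vanish as |x| -> 1."""
    if abs(x) >= 1:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

# bump is nonzero inside (-1, 1) and vanishes outside; its support is [-1, 1]
```

This real bump is smooth but not real-analytic at the endpoints, which is exactly the kind of flexibility the corollary below says complex functions cannot have.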

A short story before tossing the corollary at you: bump functions are nice, because sometimes we need to partition up spaces and nicely distribute functions about.  For example, there are things called partitions of unity which allow us to talk about functions on manifolds nicely.  While working through one of my books for diffi-manifolds today, I noticed that while most of the time the book worked in an arbitrary field, one particular theorem was stated only for the reals.  I wasn’t sure if it was a typo or not, so I attempted to adapt the proof for complex numbers (at least!) but I got to a point where I needed to use the complex equivalent of a bump function.  No matter where I looked, I couldn’t find any mention of them.  And then it hit me.


Corollary.  Suppose that g:{\mathbb C}\rightarrow {\mathbb C} is a bump function as we’ve defined above.  Then g is the zero function.


Proof.  By Liouville’s theorem, since g is bounded and entire (here “smooth” means holomorphic, as a complex function that is differentiable once is infinitely differentiable), it is constant.  Since g has compact support and {\mathbb C} is not compact, g takes the value zero somewhere; hence the constant is zero, and g is zero everywhere.


So there’s not much bump in bump functions on the complex plane.  That’s kind of sad!  Nonetheless, it’s a good (or at least cute) application of Liouville’s theorem.

The Hilbert Basis theorem is probably one of the easiest-to-state theorems that I know of in commutative algebra.  The last time I posted about it, I really butchered the proof; not that it was long, but it doesn’t really do anything for me.  Reading back on it now, it doesn’t seem at all intuitive to me.  The proof came through a long line of telephoning: the professor was reading from his notes, I was copying from the board, and then I was copying from my notes.  Now that I have a bit more time, I’d like to go through the proof again, but this time I’d like to motivate the theorem and proof.  This is worthwhile not just because the proof is a common proof-type (there are a ton of proofs that go a similar way in the commutative algebra book I’m going through) but because it’s not nearly as difficult as it looks at first glance.

Read the rest of this entry »

Hilbert’s Nullstellensatz.

October 15, 2010

This serious-sounding title is apt for this post, because the Nullstellensatz is the big time.  This is one of the "big" results in algebraic geometry.  Before we dive into the theorem, though, let’s motivate it a little bit.

Read the rest of this entry »

What the Hell is a Module?

October 12, 2010

This post is going to be a gentle introduction to what a module is.  It isn’t hard, but, for me, modules were sort of just “thrown in” with a whole bunch of defining properties and no motivation for why I should care about them.  I’m hoping to motivate them at least a little bit so that you feel more comfortable thinking and working with them!

Read the rest of this entry »

After I learned about the topologist’s sine curve, I started using it almost immediately; it’s a really sweet example of a graph that is connected, but is not path connected or even locally path connected!  Let’s just jump right in and define it.



The equation for the topologist’s sine curve is

f(x) = \sin(\frac{1}{x})

for every x\in (0,1].  We also include the vertical line segment at x = 0 from -1\leq y \leq 1.  The reason is that the closure of the graph of f includes this segment, which is easy to believe once we notice that the curve oscillates up and down very quickly near 0.  It does take a bit of proving, but not much (it suffices to show that every point on the vertical segment we added is a limit point of the graph).
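The limit-point claim can be made concrete: for any target y\in [-1,1], the x-values 1/(\arcsin(y) + 2\pi n) tend to 0 and the curve passes through height y at each of them.  A quick numerical sketch of this (the function name is mine):

```python
import math

def approach_points(y, n_terms=5):
    """For a target height y in [-1, 1], produce x-values x_n -> 0 with
    sin(1/x_n) = y, witnessing that (0, y) is a limit point of the graph
    of sin(1/x) on (0, 1]."""
    theta = math.asin(y)
    # theta + 2*pi*n is positive for n >= 1, so each x_n lies in (0, 1]
    return [1.0 / (theta + 2 * math.pi * n) for n in range(1, n_terms + 1)]

xs = approach_points(0.5)
# each x lies in (0, 1], sin(1/x) equals 0.5, and the xs shrink toward 0
```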

Now, let’s show a few properties that the topologist’s sine curve has.

Read the rest of this entry »

While not an especially difficult proof, this kind of thing is standard when we’re using compact sets.  What we’re going to do is take an open cover, pull it back, use compactness, then push it forward.  Let’s get down to it!


Theorem: If f: X\rightarrow Y is continuous and X is a compact space, then f(X) is also compact.


It suffices to show that every open cover of f(X) has a finite subcover.  Let \{U_{\alpha}\}_{\alpha} be an arbitrary open cover of f(X).  Note that \bigcup_{\alpha} f^{-1}(U_{\alpha}) \supseteq X and each f^{-1}(U_{\alpha}) is open since f is continuous.  Since X is compact, some finite subfamily \{f^{-1}(U_{1}), f^{-1}(U_{2}), \dots, f^{-1}(U_{n})\} (after possibly renaming the indices) covers X.  Therefore, we have that


\{f(f^{-1}(U_{1})), f(f^{-1}(U_{2})), \dots, f(f^{-1}(U_{n}))\}


covers f(X): every point of f(X) is f(x) for some x\in X, this x lies in some f^{-1}(U_{i}), and so f(x)\in f(f^{-1}(U_{i}))\subseteq U_{i}.  Hence \{U_{1}, U_{2}, \dots, U_{n}\} is a finite subcover of our original cover.  This proves that f(X) is compact.  \hfill \Box



This is a nice theorem, because it lets us push compact sets into strange spaces and preserve compactness, knowing only that our map is continuous and our original set is compact.