This is a really sweet deal.  In particular, if we know that our matrix is orthogonal, then finding its inverse costs us almost nothing: the inverse is just the transpose.  Combined with the spectral theorem (which states that if A is symmetric, there is an orthogonal matrix S such that S^{-1}AS is a diagonal matrix whose entries are the eigenvalues of A), this gives us a tool for finding diagonalizations of matrices.
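As a quick worked example (my own, not from the original post): take the symmetric matrix

\displaystyle A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad S = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.

The columns of S are unit eigenvectors of A, so S is orthogonal, S^{-1} = S^{T}, and

\displaystyle S^{-1}AS = S^{T}AS = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix},

with the eigenvalues 3 and 1 sitting on the diagonal, and we never had to do any real work to invert S.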



Linear Algebra is strange.  On the surface, we have a ton of tricks that we can apply to make calculations nicer (diagonalizing matrices, finding orthonormal bases, …), but deeper down a lot of things connect to one another in really unexpected ways (to me, anyway!).

Here’s the problem.  We have an n\times n matrix called M, and we know n-1 of its eigenvalues.  We have ALMOST every eigenvalue, but we’re just missing one.  What can we do about this?
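One standard trick (though the full post may well do something different): the trace of M equals the sum of its eigenvalues counted with multiplicity, so the missing one is

\displaystyle \lambda_{n} = \operatorname{tr}(M) - \sum_{i=1}^{n-1} \lambda_{i}.

The same game works with the determinant and the product of the eigenvalues, as long as the known eigenvalues are all nonzero.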


Triangulating a Surface.

November 17, 2010

In some cases, we’d like to be able to break down a surface nicely into triangles or things which look like triangles.  There’s a (strong!) theorem which states that every compact surface has a finite triangulation and every surface has a (potentially infinite) triangulation.  We’ll talk about how to triangulate a surface below.
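For a concrete picture (my own standard example, not from the post): the boundary of a tetrahedron triangulates the sphere S^{2} with 4 vertices, 6 edges, and 4 triangular faces, and the Euler characteristic comes out right:

\displaystyle V - E + F = 4 - 6 + 4 = 2 = \chi(S^{2}).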

(Note: this used to be a part of the Homology Primer, but I decided against using triangulations to talk about homology.  Nonetheless, triangulation is a topic which comes up in topology so I decided to keep this post up as a reference.)

 


Here is what I came up with for the last post on the Plus One game, and a new game on graphs.


Algorithms: Plus One Game.

November 12, 2010

I woke up today with a game stuck in my head.

 

The Game: Player A picks a number from 1 to 10.  Player B guesses a number.  If Player B’s guess matches Player A’s number, then Player B wins.  If not, then Player A adds 1 to his number and the game continues.

 

This is not a difficult game to play.  The interesting thing here is that even though Player B has a winning strategy (simply guess “10” every time; eventually Player A’s number reaches 10), the game could theoretically go on forever: say Player A picked “2” but Player B always guesses “1.”

The question is, how quickly can we guarantee a win for Player B?

 

The first solution I thought of was the following: have Player B guess “5” for five turns.  If he doesn’t get it in those five turns, then have him guess “15” for the next five turns.  After a bit of thinking, though, this is not actually any better than Player B just guessing “10” every time: both strategies need at most ten turns in the worst case.
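Here is a quick brute-force check (my own sketch, not from the post; the function and variable names are just for illustration).  It plays both strategies against every possible starting number and compares the worst-case and average number of turns.

    # Plus One game, first version: A picks a number in 1..10 and adds 1 each turn.
    # Compare B's "always guess 10" strategy with "5 for five turns, then 15 for five turns".

    def turns_to_win(start, guesses):
        """Number of turns B needs when A starts at `start` and B follows `guesses`."""
        number = start
        for turn, guess in enumerate(guesses, start=1):
            if guess == number:
                return turn
            number += 1              # A adds 1 and the game continues
        return None                  # B never wins within this finite list of guesses

    always_ten = [10] * 10
    five_then_fifteen = [5] * 5 + [15] * 5

    for name, strategy in [("always 10", always_ten), ("5 then 15", five_then_fifteen)]:
        results = [turns_to_win(k, strategy) for k in range(1, 11)]
        print(name, "worst case:", max(results), "average:", sum(results) / len(results))

Both strategies come out the same: the worst case is ten turns, and the average (over the ten equally likely starting numbers) is 5.5 turns.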

 

First Question: Is this the best we can do?

Now let’s make the game a bit more interesting, because (at this point) the game has a maximum value and an obvious winning strategy for Player B.

 

The (Harder) Game: Player A picks any natural number.  Player B guesses a natural number, and if it matches Player A’s, then Player B wins.  If not, Player A adds 1 to his number and the game continues.

 

This game has a similar feel, but an infinite twist.  There is no longer a maximum value, so our original winning strategy does not work.  Nonetheless, is there a finite or countably infinite “optimal” strategy for player B?

 

If we were able to prove that the average number of turns for each finite game (the first game, and the ones with a maximum value M) is \displaystyle \frac{M-1}{2}, then the natural thing to try for the infinite game would be to just take a limit.  I have a feeling that the first game can be shown to have such an optimal strategy using some kind of discrete math argument, but it’s been a while.  Anyone want to take a swing at this?
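For what it’s worth, here is one way to get the \frac{M-1}{2} figure (my own back-of-the-envelope count, taking “turns” to mean the number of times A has to add 1): if A’s starting number k is equally likely to be anything in 1, \dots, M and B always guesses M, then A adds 1 exactly M - k times, so the average is

\displaystyle \frac{1}{M}\sum_{k=1}^{M}(M-k) = \frac{0 + 1 + \cdots + (M-1)}{M} = \frac{M-1}{2}.

If you instead count the turn on which B’s guess lands, everything shifts by one and the average is \frac{M+1}{2}.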

 

(Note: Brooke came up with a winning strategy for Player B in the infinite case, and the solution is posted in the next post.  It turns out that even in the infinite case this is not a very interesting game, and there is a strategy that lets Player B win in finitely many turns.  It’s probably exactly what you think it is: count up by 2 each turn, starting at the beginning.)
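To make the “count up by 2” idea concrete, here is a small sketch (my own reading of the strategy, with the convention that B guesses 1, 3, 5, \dots and that B guesses before A adds 1; the exact off-by-one details depend on how you count turns).  The point is that B’s guess gains on A’s number by exactly 1 per turn, so it catches up after finitely many turns.

    # Infinite Plus One game: A picks any natural number n and adds 1 each turn.
    # B guesses 1, 3, 5, ... ("go up by 2"), so the gap number - guess shrinks by 1 per turn.

    def turns_until_caught(n):
        """Turns until B's guess matches A's number when A starts at n."""
        number, guess, turn = n, 1, 1
        while guess != number:
            number += 1   # A adds 1
            guess += 2    # B goes up by 2
            turn += 1
        return turn

    # B wins on turn n no matter which natural number A picked:
    print([turns_until_caught(n) for n in range(1, 11)])   # prints [1, 2, 3, ..., 10]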

Introduction to Simplices.

November 11, 2010

There are a number of ways in topology to make shapes.  We can work in Euclidean space and make them out of equations.  For example, we can make this torus:

[image: a torus]

by plotting the equation \displaystyle \left(c - \sqrt{x^{2} + y^{2}}\right)^{2} + z^{2} = a^{2} in x, y, and z.

Another way we could think of the torus is to take a circle in the xy-plane, centered at some point on the y-axis away from the origin, and rotate it about the x-axis, say.  You could think of this as taking one of those bubble wands, holding it at arm’s length, and spinning around in a circle.  If the bubble didn’t pop, you’d make a torus-shaped bubble around you.  Wouldn’t that be cool?
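To tie the two pictures together (my own addition, with the axes relabeled so that the rotation is about the z-axis, matching the equation above): if the rotating circle has radius a and its center stays at distance c > a from the axis, the rotation gives the usual parametrization

\displaystyle (x, y, z) = \big((c + a\cos u)\cos v,\ (c + a\cos u)\sin v,\ a\sin u\big), \qquad u, v \in [0, 2\pi),

and plugging this in gives \sqrt{x^{2}+y^{2}} = c + a\cos u, so the implicit equation above is satisfied.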


Let me note two things here — one is a mathematical point, one is a technical point. 

First, the math: the type of homology I will be introducing here is cellular homology, because I think it’s the best way for someone to actually get their hands dirty and compute homology groups of spaces.  This is not too much of a loss of generality, since on nice spaces (e.g., finite CW complexes) it agrees with most of the other homology theories.

Now, a technical note.  I am a new user of the Bamboo pen tablet, which, so far, is fantastic.  This means that many of my new pictures will (whenever possible) be hand-drawn.  When precision counts, I will continue to use Mathematica, but in general, drawing things in Mathematica is a huge pain once you go beyond just graphing equations.

Because my drawing is terrible, in general, if you have any questions about what the pictures mean, please comment and I will try to elaborate.  What seems obvious to me is not necessarily obvious to all of you, so telling me that my drawing of a hexagon looks like a crying cat will help me teach better.

Now onwards to cells!

In another post, I noted something a bit strange at first reading: some authors use the word holomorphic for having a power series expansion and reserve analytic for complex differentiable, while other authors swap those terms.  I then noted that “this doesn’t matter.”  Well, why not?  Definitions are pretty important in mathematics!  My reasoning is that these are really the same thing: if f is holomorphic then it is also analytic, and vice versa.  I’ve been putting off this proof for way too long, so let’s just get it over with.  It’s not hard; it’s just an analysis proof, which means it’s extremely easy to describe (“take a little ball and do something in it”) but extremely tedious to write out (“take an epsilon such that this epsilon is less than the sum of the minimum of the supremums of…”).  Still, I’m going to try to give a complete proof and motivate every step.

After this, I’ll give a short proof that if a function is complex differentiable (holomorphic, to me) once, then it is complex differentiable infinitely many times.  It’s not a direct corollary, but it’s a nice fact to know.
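For reference (this is the standard tool behind that second fact, though the post itself may take a different route): once you have the Cauchy integral formula, differentiating under the integral sign gives a formula for every derivative at once,

\displaystyle f^{(n)}(z_{0}) = \frac{n!}{2\pi i}\oint_{\gamma} \frac{f(z)}{(z - z_{0})^{n+1}}\, dz,

where \gamma is a small circle around z_{0} inside the region where f is holomorphic.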


[image: Two functions (solid blue and red) make the dotted purple function when multiplied!]

There are always a few calculus students who make the error of taking the derivative of fg and getting f'g'.  Of course, we know that this is not true in general; the product rule for derivatives is as follows:

 

Theorem (Product Rule).  If f, g are differentiable, then (fg)' = f'g + g'f.

 

Given this formula, it’s a nice exercise for students to find out for which functions it is true that (fg)' = f'g'.
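If you want something to check an answer against (a quick example of my own, and certainly not the only one): take f(x) = g(x) = e^{2x}.  Then

\displaystyle (fg)'(x) = \left(e^{4x}\right)' = 4e^{4x} = 2e^{2x}\cdot 2e^{2x} = f'(x)\,g'(x),

so the “freshman product rule” really does hold for this pair.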


I just used this counter-example, so I felt like I should share it with all of you guys.

The particular point topology is defined in the following way: given some set X, we let p\in X be a distinguished (or particular) point.  It can be any point, really.  Then we declare a set to be open if it is the empty set or if it contains p.  Convince yourself that this is, in fact, a topology by checking it against the definition of a topology.
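For a tiny concrete case (my own illustration): take X = \{a, b, c\} with particular point p = a.  The open sets are

\displaystyle \varnothing,\ \{a\},\ \{a, b\},\ \{a, c\},\ \{a, b, c\},

and the axioms are easy to check: any union of sets containing a still contains a, and any intersection of open sets either contains a or is empty.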
