## The Inverse of an Orthogonal Matrix is its Transpose.

### November 30, 2010

This is a really sweet deal. In particular, if we know that our matrix is orthogonal, we can cut down on the time it takes to find the inverse *significantly*: for an orthogonal matrix $Q$, we have $Q^{-1} = Q^{T}$. Combined with the spectral theorem (which states that if the matrix $A$ is symmetric, there is an *orthogonal* matrix $Q$ such that $Q^{T}AQ$ is a diagonal matrix with entries the eigenvalues of $A$) this gives us a tool for finding diagonalizations of matrices.
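As a quick sanity check, here is a minimal NumPy sketch (the symmetric matrix below is an arbitrary example of mine, not one from the post): the eigenvector matrix of a symmetric matrix is orthogonal, its inverse really is its transpose, and conjugating by it diagonalizes the matrix.

```python
import numpy as np

# An arbitrary symmetric matrix, chosen just for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# For a symmetric matrix, np.linalg.eigh returns the eigenvalues and an
# orthogonal matrix Q whose columns are eigenvectors.
eigenvalues, Q = np.linalg.eigh(A)

# The inverse of the orthogonal matrix Q is just its transpose:
assert np.allclose(np.linalg.inv(Q), Q.T)

# And Q^T A Q is the diagonal matrix of eigenvalues (spectral theorem):
assert np.allclose(Q.T @ A @ Q, np.diag(eigenvalues))
```

Note that no Gaussian elimination is needed to invert $Q$: transposing is essentially free.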

Linear Algebra is strange. On the surface, we have a ton of tricks that we can apply to things to make calculations nicer (diagonalizing matrices, finding orthonormal bases,…) but deeper down a lot of things connect to one-another in really unexpected ways — to me, anyway!

Here’s the problem. We have an $n \times n$ matrix called $A$, and it has $n$ eigenvalues. We have ALMOST every eigenvalue, but we’re just missing one. What can we do about this?

## Triangulating a Surface.

### November 17, 2010

In some cases, we’d like to be able to break down a surface nicely into triangles or things which look like triangles. There’s a (strong!) theorem which states that every compact surface has a finite triangulation and every surface has a (potentially infinite) triangulation. We’ll talk about how to triangulate a surface below.

(**Note:** this used to be a part of the Homology Primer, but I decided against using triangulations to talk about homology. Nonetheless, triangulation is a topic which comes up in topology, so I decided to keep this post up as a reference.)

## Algorithms: Plus One Game Ideas and the Graphed Plus One Game.

### November 13, 2010

Here is what I came up with for the last post on the Plus One game, and a new game on graphs.

## Algorithms: Plus One Game.

### November 12, 2010

I woke up today with a game stuck in my head.

**The Game:** Player A picks a number from 1 to 10. Player B guesses a number. If Player B’s guess matches Player A’s number, then Player B wins. If not, then Player A adds 1 to his number and the game continues.

This is not a difficult game to play. The interesting thing here is that even though there is a winning strategy (Player B simply guesses “10” every time; eventually he will win), the game could theoretically go on forever — say Player A picked “2” but Player B always guesses “1.”

The question is, *how quickly can we guarantee a win for Player B?*

The first solution I thought of was the following: have Player B guess “5” for five turns. If he doesn’t get it after five turns, then have him guess “15” for the next five. After a bit of thinking, though, I realized this is not actually any better than Player B just guessing “10” every time.
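A tiny simulation makes the comparison concrete (my sketch, not from the post): playing every starting number from 1 to 10 against the guess-“10”-every-turn strategy shows the worst case is exactly 10 turns.

```python
def turns_to_win(start, guesses):
    """Play the Plus One game: Player A starts at `start` and adds 1 after
    each miss; Player B plays the guesses in order.  Returns the turn
    (1-indexed) on which a guess hits, or None if B runs out of guesses."""
    number = start
    for turn, guess in enumerate(guesses, start=1):
        if guess == number:
            return turn
        number += 1
    return None

# Guessing "10" every turn catches any start in 1..10: a start of a is
# caught on turn 11 - a, so the worst case (start = 1) takes 10 turns.
worst = max(turns_to_win(start, [10] * 10) for start in range(1, 11))
print(worst)  # 10
```

The "guess 5 for five turns, then 15 for five turns" strategy can be checked the same way by passing `[5] * 5 + [15] * 5` as the guess list; its worst case is also 10 turns, matching the observation above.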

**First Question: Is this the best we can do?**


Now let’s make the game a bit more interesting, because (at this point) the game has a maximum value and an obvious winning strategy for Player B.

**The (Harder) Game:** Player A picks any natural number. Player B guesses a natural number, and if it matches Player A’s, then Player B wins. If not, Player A adds 1 to his number and the game continues.

This game has a similar feel, but an infinite twist. There is no longer a maximum value, so our original winning strategy does not work. Nonetheless, is there a finite or countably infinite “optimal” strategy for player B?

If we were able to prove something about the average number of turns for each finite game (the first game, and the ones with a maximum value $n$), then the infinite case would follow: just take a limit. I have a feeling that the first game can be proved to have such an optimal strategy using some kind of discrete math proof, but it’s been a while. Anyone want to take a swing at this?

(**Note:** Brooke came up with a winning strategy for Player B in the infinite case, and the solution is posted in the next post. It turns out that even in the infinite case this is not a very interesting game, and there is actually a *finite* winning strategy for Player B. It’s probably exactly what you think it is: go up in multiples of 2’s starting at the beginning.)
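Here is a sketch of one reading of that strategy (my interpretation, since the post leaves the details to the next post): Player B starts guessing at 1 and goes up by 2 each turn. Since Player A’s number goes up by only 1 per turn, B’s guess gains on it by 1 per turn and must eventually land on it.

```python
def catch_turn(start, max_turns=10_000):
    """Player A starts at `start` and adds 1 after every miss; Player B
    guesses 1, 3, 5, ... (up by 2 each turn).  Returns the winning turn."""
    number, guess = start, 1
    for turn in range(1, max_turns + 1):
        if guess == number:
            return turn
        number += 1  # A's number climbs by 1
        guess += 2   # B's guess climbs by 2, gaining by 1 per turn
    return None

# B's guess on turn t is 2t - 1 and A's number is start + t - 1, so they
# meet exactly when t = start.
assert all(catch_turn(a) == a for a in range(1, 1000))
```

So for a starting number $a$, Player B wins on turn $a$: a finite winning strategy, even though the game board is infinite.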

## Introduction to Simplices.

### November 11, 2010

There are a number of ways in topology to make shapes. We can work in euclidean space and make them out of equations. For example, we can make this torus:

by plotting the equation $\left(\sqrt{x^{2}+y^{2}} - R\right)^{2} + z^{2} = r^{2}$ in $x$, $y$, and $z$.

Another way we could think of the torus is taking one circle centered at some non-origin point on the $x$-axis and rotating it about the $z$-axis, say. You could think of this as taking one of those bubble wands, holding it at arm’s length, and spinning around in a circle. If the bubble didn’t pop, you’d make a torus bubble around you. Wouldn’t that be cool?
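The two descriptions match, and a short numerical sketch can check it (the radii $R$ and $r$ below are illustrative choices of mine): sweep a circle of radius $r$ at distance $R$ from the $z$-axis, and verify each swept point satisfies the implicit torus equation $\left(\sqrt{x^{2}+y^{2}} - R\right)^{2} + z^{2} = r^{2}$.

```python
import math

R, r = 2.0, 1.0  # distance from the z-axis, and tube radius (R > r)

def torus_point(u, v):
    """The 'bubble wand' picture: a circle of radius r in a vertical
    plane, swept around the z-axis at distance R from it."""
    x = (R + r * math.cos(v)) * math.cos(u)
    y = (R + r * math.cos(v)) * math.sin(u)
    z = r * math.sin(v)
    return x, y, z

# Every swept point satisfies the implicit equation of the torus.
for u in [0.0, 1.0, 2.5]:
    for v in [0.0, 1.5, 3.0]:
        x, y, z = torus_point(u, v)
        lhs = (math.hypot(x, y) - R) ** 2 + z ** 2
        assert abs(lhs - r ** 2) < 1e-9
```

The check works because $\sqrt{x^{2}+y^{2}} = R + r\cos v$ (positive since $R > r$), so the left-hand side collapses to $r^{2}\cos^{2}v + r^{2}\sin^{2}v = r^{2}$.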

## Homology Primer 0: Introduction to the Primer.

### November 11, 2010

Let me note two things here — one is a mathematical point, one is a technical point.

First, math: the type of homology I will be introducing here will be cell homology, because I think that it’s the best way for someone to actually get their hands dirty and compute homology groups of spaces. This is not too much of a loss of generality, since in nice spaces (e.g., finite CW complexes) this agrees with most of the other homology theories.

Now, a technical note. I am now a beginner user of the Bamboo pen tablet, which, so far, is fantastic. This means that many of my new pictures (whenever possible) will be hand-drawn. Note that when precision counts, I will continue to use Mathematica, but generally drawing things in Mathematica is a huge pain for anything past graphing equations.

Because my drawing is terrible, in general, if you have any questions about what the pictures mean, please comment and I will try to elaborate. What seems obvious to me is not necessarily obvious to all of you, so telling me that my drawing of a hexagon looks like a crying cat will help me teach better.

Now onwards to cells!

## You say Holomorphic, I say Analytic.

### November 9, 2010

In another post, I noted something a bit strange at first reading: some authors use the word holomorphic to describe a function with a power series expansion and reserve analytic for complex differentiable, while other authors swap those terms. I then noted that “this doesn’t matter.” Well, why not? I mean, definitions are pretty important in mathematics! My reasoning is: these are really the same thing. If $f$ is holomorphic then it is also analytic, and vice versa. I’ve been putting off doing this proof for way too long, so let’s just get it over with. It’s not hard; it’s just an *analysis proof*, which means it’s extremely easy to describe (“take a little ball and do something in it”) but extremely tedious to work out (“take an epsilon such that this epsilon is less than the sum of the minimum of the supremums of…”). Still, I’m going to try to give a complete proof and motivate every step.

After this, I’ll give a short proof that if a function is complex differentiable (holomorphic, to me) once, then it is complex differentiable infinitely many times. It’s not a direct corollary, but it’s a nice fact to know.

## Cute Proofs: The Product Rule for Derivatives.

### November 3, 2010

There are always a few calculus students who make the error of trying to take the derivative of $fg$ and getting $f'g'$. Of course, we know that this is not true in general, and the product rule for derivatives is as follows:

**Theorem (Product Rule).** If $f$ and $g$ are differentiable, then $(fg)' = f'g + fg'$.

Given this formula, it’s a nice exercise for students to find out for which functions $f$ and $g$ it is true that $(fg)' = f'g'$.
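A quick SymPy sketch (the particular functions $\sin x$ and $e^{x}$ are arbitrary choices of mine) shows the product rule holding symbolically, and shows that the naive $f'g'$ really does differ from the true derivative:

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.sin(x), sp.exp(x)  # an arbitrary pair of differentiable functions

# The common mistake: (fg)' = f' g' ...
wrong = sp.diff(f, x) * sp.diff(g, x)
# ... versus what differentiation actually gives:
correct = sp.diff(f * g, x)

# The product rule f'g + fg' matches the true derivative exactly ...
assert sp.simplify(correct - (sp.diff(f, x) * g + f * sp.diff(g, x))) == 0
# ... while the naive f'g' does not.
assert sp.simplify(correct - wrong) != 0
```

Swapping in other choices of `f` and `g` is a good way to hunt for the special pairs the exercise asks about.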

## Counter-Examples: The Particular Point Topology.

### November 3, 2010

I just used this counter-example, so I felt like I should share it with all of you guys.

The particular point topology is defined in the following way: given some space $X$, we let $p \in X$ be a distinguished (or particular) point. It can be any point, really. Then we let a set $U \subseteq X$ be open if it is the empty set, or if it contains $p$. Convince yourself that this is, in fact, a topology by going over the definition of a topology.
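For a finite space you can even let a computer do the convincing. Here is a minimal sketch on a three-point set (the set and the choice of particular point are my own illustrative picks): enumerate all subsets, declare the ones containing the particular point (plus the empty set) open, and check closure under unions and intersections.

```python
from itertools import chain, combinations

X = {0, 1, 2}
p = 0  # the distinguished ("particular") point

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, k) for k in range(len(s) + 1))]

# Open sets: the empty set, plus every subset containing p.
opens = {U for U in powerset(X) if not U or p in U}

# Topology axioms: X and the empty set are open, and the open sets are
# closed under unions and (finite) intersections.
assert frozenset(X) in opens and frozenset() in opens
for U in opens:
    for V in opens:
        assert U | V in opens
        assert U & V in opens
```

The intersection check is where the "or the empty set" clause earns its keep: two open sets both containing $p$ intersect in a set containing $p$, and any intersection with the empty set is empty, so nothing falls outside the collection.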