Stochastic Matrices: Kind of like Sudoku, but kind of not.

July 22, 2010

At some point, I mentioned that if V is a nontrivial finite-dimensional complex vector space and T:V\rightarrow V is a linear map, then T has an eigenvalue.  We also noted that if V is an odd-dimensional real space, then T has an eigenvalue.  Stochastic matrices, which show up in probability theory and a whole bunch of other places, have this same sort of nice property: if M(T) happens to be stochastic, then T has an eigenvalue!

But what the heck is a (right) stochastic matrix?  Well, it’s not difficult to explain: it is a matrix in which the entries of each row add up to 1.  (In probability theory the entries are usually also required to be nonnegative, but we won’t need that here.)  So, for example, look at the matrix

\left(\begin{array}{ccc} 1 & 0 & 0 \\ \frac{2}{3} & -\frac{1}{3} & \frac{2}{3} \\ \frac{1}{2} & 0 & \frac{1}{2} \end{array}\right)

Notice that the entries in the first row add up to 1 and, similarly, the second and third rows each add up to 1.  But

\left(\begin{array}{ccc} 1 & 2 & 0 \\ \frac{2}{3} & -\frac{1}{3} & \frac{2}{3} \\ \frac{1}{2} & 0 & \frac{1}{2} \end{array}\right)

is not a stochastic matrix, because the first row does not add up to 1 (as, obviously, 1 + 2 + 0 = 3).  Sad.  But, now, here’s the formal theorem.
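The row-sum condition is easy to check by hand, but here’s a quick sanity check in Python (a sketch; the helper name `is_right_stochastic` is mine, and I use exact rationals so nothing is lost to rounding):

```python
from fractions import Fraction

def is_right_stochastic(rows):
    """Return True if every row of the matrix sums to 1."""
    return all(sum(row) == 1 for row in rows)

# The first example matrix: every row sums to 1.
M = [
    [Fraction(1), Fraction(0), Fraction(0)],
    [Fraction(2, 3), Fraction(-1, 3), Fraction(2, 3)],
    [Fraction(1, 2), Fraction(0), Fraction(1, 2)],
]

# The second example: the first row sums to 3, so it fails.
N = [
    [Fraction(1), Fraction(2), Fraction(0)],
    [Fraction(2, 3), Fraction(-1, 3), Fraction(2, 3)],
    [Fraction(1, 2), Fraction(0), Fraction(1, 2)],
]

print(is_right_stochastic(M))  # True
print(is_right_stochastic(N))  # False
```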

Theorem: If V is a nontrivial finite-dimensional vector space and T:V\rightarrow V is a linear map such that M(T) is a stochastic matrix, then T has an eigenvalue.

Proof. Okay, let’s find it.  Let’s write out M(T):

\left( \begin{array}{cccc} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \dots & a_{n,n}\end{array}\right)

where \sum_{j = 1}^{n} a_{i,j} = 1 for all i.  Now, let’s find an eigenvalue by finding an eigenvector.  Think about this for a second.  We want

M(T)v = \lambda v

and we only know one thing about M(T), namely that it’s stochastic.  So, if you haven’t figured it out yet, let’s let v be the column vector made up of all 1’s.  So,

\left(\begin{array}{cccc} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \dots & a_{n,n}\end{array}\right)\left(\begin{array}{c} 1 \\ 1 \\ \vdots \\ 1 \end{array}\right)

\displaystyle = \left(\begin{array}{c} \sum_{j = 1}^{n} a_{1,j} \\ \sum_{j = 1}^{n} a_{2,j} \\ \vdots \\ \sum_{j = 1}^{n} a_{n,j} \end{array}\right) = \left(\begin{array}{c} 1 \\ 1 \\ \vdots \\ 1 \end{array}\right)

which is nice: each entry of M(T)v is just a row sum of M(T), and every row sums to 1.  So M(T)v = v; that is, \lambda = 1 is an eigenvalue of T, with the all-1’s vector v as an eigenvector.  \Box

Nice.
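And since the whole proof is just “multiply by the all-1’s vector and read off the row sums,” we can watch it happen numerically (again a sketch, using the first example matrix and a hypothetical helper `apply`):

```python
from fractions import Fraction

# The stochastic matrix from the first example above.
M = [
    [Fraction(1), Fraction(0), Fraction(0)],
    [Fraction(2, 3), Fraction(-1, 3), Fraction(2, 3)],
    [Fraction(1, 2), Fraction(0), Fraction(1, 2)],
]

def apply(matrix, v):
    """Multiply a matrix by a column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in matrix]

# v is the all-1's column vector; each entry of M v is a row sum of M.
v = [Fraction(1)] * 3
print(apply(M, v) == v)  # True: M v = v, so 1 is an eigenvalue
```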
