## Ranks, Nullities, and Theorems.

### May 29, 2010

In the last linear algebra post we went over the Cayley-Hamilton theorem, which states that every square matrix satisfies its own characteristic polynomial.  We even used it to prove somethin’ pretty kickin’.  But if you ask someone what their favorite theorem in linear algebra is, they’ll probably say the rank-nullity theorem.  This theorem, which says something very nice about vector spaces, is cited much more frequently, and it has a few pretty surprising corollaries.  Let’s dig right in.

Let’s let $M$ be our $m\times n$ matrix.  Let’s just write it out so we can bask in its glory:

$\left(\begin{array}{cccc} a_{11} & a_{12} & a_{13} & \dots \\ a_{21} & a_{22} & a_{23} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{array}\right)$

Now, we can do a lot of things with this matrix.  But, in particular, in algebra, sometimes we want to know every $x$ such that the equation $Mx = 0$ holds.  That is, we want every vector $x$ such that when we multiply the matrix by it, we get the zero vector.  We call the set of all such vectors the null space (or kernel), and we call its dimension the nullity.

Now, given a matrix, let’s think of each column as a different vector.  The maximum number of linearly independent column vectors (again, if you do not know what this is, look it up!  Linear independence is one of the most basic and important concepts in linear algebra.) is called the column rank.  Similarly, the maximum number of linearly independent row vectors is called the row rank.  It is a theorem (and one that I probably won’t prove, but it’s not difficult to do so) that the column rank is always equal to the row rank.  Since the row and column rank are the same, we simply call this number the rank.
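If you’d like to see the row-rank-equals-column-rank fact in action without proving it, here’s a quick numerical sanity check in numpy.  The example matrix is arbitrary, chosen just for illustration (its third row is $2 \cdot \text{row}_2 - \text{row}_1$, so the rank is 2):

```python
import numpy as np

# An arbitrary 3x3 example; row 3 = 2*row 2 - row 1, so the rank is 2.
M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

row_rank = np.linalg.matrix_rank(M)    # rank of M
col_rank = np.linalg.matrix_rank(M.T)  # same computation on the transpose
print(row_rank, col_rank)  # 2 2
```

Computing the rank of the transpose swaps the roles of rows and columns, so the two numbers agreeing is exactly the row-rank = column-rank theorem for this matrix.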

Theorem (Rank-Nullity Theorem): For an $m\times n$ matrix, the nullity plus the rank is equal to the number of columns in the matrix.  In other words, $\mbox{Rank}(M) + \mbox{Nullity}(M) = n$.

The proof isn’t that tricky, so I leave it off here.  But let’s do an example of this.

Example:

$\left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{array}\right)$

So, we have that there are obviously three linearly independent rows (since these are three of the standard basis vectors for ${\mathbb R}^{5}$), so the rank is 3.  Since the number of columns is 5, we have, by the rank-nullity theorem, that

$3 + \mbox{Nullity}(M) = 5$

or, in other words, the nullity of $M$ is equal to 2.  We have found this without even trying to calculate any element in the null space.  Pretty kickin’.
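We can confirm this with a quick computation.  Here’s one way in numpy; the SVD trick for reading off a null-space basis is a standard numerical approach, not anything specific to this post:

```python
import numpy as np

# The 3x5 matrix from the example above.
M = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]])

rank = np.linalg.matrix_rank(M)  # 3
n = M.shape[1]                   # 5 columns
nullity = n - rank               # rank-nullity: 5 - 3 = 2

# A null-space basis can be read off the SVD: the right singular vectors
# corresponding to zero singular values span the null space.
_, _, Vt = np.linalg.svd(M)
null_basis = Vt[rank:]           # the last n - rank rows of V^T
print(rank, nullity, null_basis.shape)  # 3 2 (2, 5)
```

The shape `(2, 5)` confirms the null space is two-dimensional, just as the theorem predicted without us computing a single null vector by hand.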

Another neat way to use the rank-nullity theorem is to use it to show that some set of vectors is linearly dependent.  For example, let’s take

$\left( \begin{array}{cc} 1 & 2 \\ -1 & -2 \end{array} \right)$

which has two column vectors which are obviously linearly dependent.  Notice that if we find some nonzero vector whose product with the matrix is zero, then our null space is non-trivial.  Let’s find such a vector.  Notice that

$\left(\begin{array}{cc} 1 & 2 \\ -1 & -2 \end{array}\right) \left( \begin{array}{c} 2 \\ -1 \end{array}\right) = 0$

This means, in particular, that the null space contains a nonzero vector, so the nullity is at least 1.  According to the rank-nullity theorem, the rank must then be either 0 or 1.  This means that the maximum number of linearly independent column vectors is either 0 or 1.  Since there are two columns, they cannot be linearly independent.  This case was relatively obvious, but there are cases when it is not so obvious.
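The little argument above is easy to check by machine as well.  A sketch in numpy, using the same matrix and null vector:

```python
import numpy as np

# The 2x2 matrix from the example; its second column is 2x the first.
M = np.array([[ 1,  2],
              [-1, -2]])

# x = (2, -1) is in the null space, as claimed:
x = np.array([2, -1])
print(M @ x)  # [0 0]

# A nonzero null vector forces nullity >= 1, hence rank <= 1,
# so the two columns cannot be linearly independent.
rank = np.linalg.matrix_rank(M)
print(rank)   # 1
```

Of course, for a $2\times 2$ matrix you could just eyeball the columns, but the same three lines work unchanged on a matrix where the dependence is far from obvious.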

We’ll get into a few more applications of rank-nullity later, and we will prove something much more general called the splitting lemma, of which the rank-nullity theorem is an immediate consequence.