## Ranks, Nullities, and Theorems.

### May 29, 2010

In the last linear algebra post we went over the Cayley-Hamilton theorem, which states that every matrix satisfies its own characteristic polynomial. We even used it to prove somethin’ pretty kickin’. But if you ask someone what their favorite theorem in linear algebra is, they’ll probably say the **rank-nullity theorem**. This theorem, which states something very nice about vector spaces, is cited much more frequently, and it has a few pretty surprising corollaries. Let’s dig right in.

Let’s let $A$ be our $m \times n$ matrix. Let’s just write it out so we can bask in its glory:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$
Now, we can do a lot of things with this matrix. But, in particular, in algebra, we sometimes want to know every $x$ such that the equation $Ax = 0$ holds. That is, we want every vector such that when we multiply the matrix by it, we get the zero vector. We call the set of all such vectors the **null space**, and the dimension of the null space is called the **nullity**.

Now, given a matrix, let’s think of each column as a different vector. The maximum number of linearly independent column vectors (again, if you do not know what this is, look it up! Linear independence is one of the most basic and important concepts in linear algebra.) is called the **column rank**. It is a theorem (and one that I probably won’t prove, but it’s not difficult to do so) that the column rank is the same as the **row rank**, the maximum number of linearly independent *row vectors*. Since the row and column rank are the same, we simply call this number the **rank**.
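Since I’m not proving that row rank equals column rank here, we can at least watch it happen numerically. A quick sketch with NumPy’s `numpy.linalg.matrix_rank` (the matrix `M` below is just a made-up example of mine, with one redundant row):

```python
import numpy as np

# A made-up example matrix; row 2 is twice row 1,
# so the rows are not all independent.
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

col_rank = np.linalg.matrix_rank(M)    # max number of independent columns
row_rank = np.linalg.matrix_rank(M.T)  # rank of the transpose = row rank

print(col_rank, row_rank)  # the two always agree
```

Whatever matrix you try, the two numbers come out equal, which is exactly the theorem.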

**Theorem (Rank-Nullity Theorem):** For an $m \times n$ matrix $A$, the nullity plus the rank is equal to the number of columns in the matrix. In other words, $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$.

The proof isn’t that tricky, so I leave it off here. But let’s do an example of this.

**Example:** Take the $3 \times 5$ matrix

$$A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}.$$

So, we have that there are obviously three independent row vectors (since these are essentially the standard basis vectors for $\mathbb{R}^5$), and since the number of columns is 5, we have, by the rank-nullity theorem, that

$$3 + \operatorname{nullity}(A) = 5,$$

or, in other words, the nullity of $A$ is equal to 2. We have found this without even trying to calculate any element in the null space. Pretty kickin’.
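If you’d rather let the computer do the counting, here is the same bookkeeping in NumPy, assuming a matrix whose three rows are standard basis vectors of $\mathbb{R}^5$ as in the example:

```python
import numpy as np

# Three rows that are standard basis vectors of R^5, as in the example.
A = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)

rank = np.linalg.matrix_rank(A)  # 3: the rows are clearly independent
n = A.shape[1]                   # 5 columns
nullity = n - rank               # rank-nullity: nullity = n - rank

print(rank, nullity)  # 3 2
```

No null-space vector was ever computed; the nullity falls out of the theorem.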

Another neat way to use the rank-nullity theorem is to use it to show that some set of vectors is linearly dependent. For example, let’s take

$$B = \begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix},$$

which has two column vectors which are obviously linearly dependent. Notice that if we find some nonzero vector $x$ such that the product $Bx$ is zero, then our null space is non-trivial. Let’s find such an element. Notice that

$$B \begin{pmatrix} 2 \\ -1 \end{pmatrix} = \begin{pmatrix} 1\cdot 2 + 2\cdot(-1) \\ 1\cdot 2 + 2\cdot(-1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

This means, in particular, that the null space contains a nonzero vector, so the nullity is at least 1. According to the rank-nullity theorem, the rank can then either be 0 or 1. This means that the maximum number of linearly independent column vectors is either 0 or 1. Since there are two column vectors, they are not linearly independent. This case was relatively obvious, but there are cases when it is not so obvious.
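The same dependence test is easy to run numerically. A sketch with NumPy, using the SVD to extract a null-space basis (the matrix `B` here is my own stand-in for a matrix with two obviously dependent columns, the second being twice the first):

```python
import numpy as np

# A stand-in matrix: the second column is twice the first.
B = np.array([[1.0, 2.0],
              [1.0, 2.0]])

# Right-singular vectors whose singular values are (numerically)
# zero span the null space of B.
_, s, vt = np.linalg.svd(B)
null_basis = vt[s < 1e-10]   # each row is a null-space vector
nullity = null_basis.shape[0]
rank = B.shape[1] - nullity  # rank-nullity again

# nullity >= 1 forces rank <= 1, so the two columns are dependent.
print(nullity, rank)  # 1 1
```

The SVD route is the standard numerical trick here, since exact row reduction is fragile in floating point.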

We’ll get into a few more applications of rank-nullity later, and we will prove something much more general called the *splitting lemma*, of which the rank-nullity theorem is an immediate consequence.
