The Inverse of an Orthogonal Matrix is its Transpose.
November 30, 2010
This is a really sweet deal. In particular, if we know that our matrix is orthogonal, we can cut down significantly on the time it takes to find the inverse. Combined with the spectral theorem (which states that if a matrix $A$ is symmetric, there is an orthogonal matrix $Q$ such that $Q^{T}AQ$ is a diagonal matrix with the eigenvalues of $A$ as its entries) this gives us a tool for finding diagonalizations of matrices.
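If you want to see that workflow in action, here's a quick numpy sketch (my own illustration, with an arbitrary symmetric matrix): `eigh` hands us an orthogonal $Q$, and we get $Q^{-1}$ for free by transposing.

```python
import numpy as np

# An arbitrary symmetric matrix, just for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# For a symmetric matrix, eigh returns real eigenvalues and an
# orthogonal matrix Q whose columns are orthonormal eigenvectors.
eigenvalues, Q = np.linalg.eigh(A)

# Q^T A Q is diagonal, with the eigenvalues on the diagonal.
print(np.allclose(Q.T @ A @ Q, np.diag(eigenvalues)))  # True

# And inverting Q is free: its inverse is its transpose.
print(np.allclose(np.linalg.inv(Q), Q.T))              # True
```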
All I want to do is state and prove this. Before we do, let’s just define the transpose really quickly.
Definition: The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix, denoted $A^{T}$, whose $(i,j)$-th entry is the $(j,i)$-th entry of $A$.
In other words, you flip the indices: each $a_{ij}$ becomes $a_{ji}$. Now, this gives us a really nice way to compute the standard dot product.
Claim: For column vectors $v, w$ of the same dimension, we have $v \cdot w = v^{T}w$, where the right-hand term is just matrix multiplication.
Why is this? Well, just think about it for a second and look at this example. Take

$$v = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \qquad w = \begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix}$$

Then we have the dot product is just $v \cdot w = 1\cdot 4 + 2\cdot 5 + 3\cdot 6 = 32$. But what is $v^{T}$? It's the row vector

$$v^{T} = \begin{pmatrix} 1 & 2 & 3 \end{pmatrix}$$

and matrix multiplication gives us

$$v^{T}w = \begin{pmatrix} 1 & 2 & 3 \end{pmatrix}\begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix} = 1\cdot 4 + 2\cdot 5 + 3\cdot 6 = 32,$$

the same as above. This should give you a clear picture of what is going on here. Now the main theorem.
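If you'd rather let the computer check the claim, here's a minimal numpy sketch using the vectors from the example above:

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])  # column vector, as above
w = np.array([[4.0], [5.0], [6.0]])

dot = np.dot(v.ravel(), w.ravel())  # the ordinary dot product: 32.0
via_transpose = (v.T @ w).item()    # v^T w, a 1x1 matrix product

print(dot, via_transpose)  # 32.0 32.0
```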
Theorem: An $n \times n$ matrix $A$ is orthogonal (all of its columns are orthonormal, not just orthogonal) if and only if $A^{-1} = A^{T}$.
Proof. This proof is not as hard as you'd expect, and uses only one main idea to make it elegant: the claim above. We first do a little prep work to get down to the "main matrix", and then the if and only if will follow. Let's let $A$ be defined as follows, with each $v_{i}$ a column vector:

$$A = \begin{pmatrix} v_{1} & v_{2} & \cdots & v_{n} \end{pmatrix}$$
Now, note that $A^{T}$ is equal to almost the same thing, except the columns are now the rows: we've turned each column on its side. Check this. It's equal to:

$$A^{T} = \begin{pmatrix} v_{1}^{T} \\ v_{2}^{T} \\ \vdots \\ v_{n}^{T} \end{pmatrix}$$
Now we just multiply these two together. We obtain, as you can check:

$$A^{T}A = \begin{pmatrix} v_{1}^{T}v_{1} & v_{1}^{T}v_{2} & \cdots & v_{1}^{T}v_{n} \\ v_{2}^{T}v_{1} & v_{2}^{T}v_{2} & \cdots & v_{2}^{T}v_{n} \\ \vdots & \vdots & \ddots & \vdots \\ v_{n}^{T}v_{1} & v_{n}^{T}v_{2} & \cdots & v_{n}^{T}v_{n} \end{pmatrix}$$
But, because of our claim above, each entry $v_{i}^{T}v_{j}$ is just the dot product $v_{i} \cdot v_{j}$, so we can simplify this nightmarish mess:

$$A^{T}A = \begin{pmatrix} v_{1} \cdot v_{1} & v_{1} \cdot v_{2} & \cdots & v_{1} \cdot v_{n} \\ v_{2} \cdot v_{1} & v_{2} \cdot v_{2} & \cdots & v_{2} \cdot v_{n} \\ \vdots & \vdots & \ddots & \vdots \\ v_{n} \cdot v_{1} & v_{n} \cdot v_{2} & \cdots & v_{n} \cdot v_{n} \end{pmatrix}$$
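If you'd like to see this entry-by-entry structure concretely, here's a quick numpy sketch (my own check, with a random matrix standing in for $A$) confirming that each entry of $A^{T}A$ is a pairwise dot product of columns:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # columns play the role of v_1, v_2, v_3

G = A.T @ A  # the "main matrix"

# Entry (i, j) of A^T A is exactly the dot product v_i . v_j.
for i in range(3):
    for j in range(3):
        assert np.isclose(G[i, j], np.dot(A[:, i], A[:, j]))
print("every entry of A^T A is a pairwise dot product of columns")
```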
This is the "main matrix" that I was describing above. This matrix is the meat of this proof. Here’s why.
Suppose that $A$ is orthogonal. Then $v_{i} \cdot v_{i} = 1$ for each $i$, and $v_{i} \cdot v_{j} = 0$ if $i \neq j$. Replacing these values in the "main matrix", we get:

$$A^{T}A = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$
which is the identity matrix $I$. So $A^{T}A = I$, and by the uniqueness of inverses for matrices, this means that $A^{T} = A^{-1}$.
Now suppose the converse is true: that $A^{T} = A^{-1}$. Well, what does this say? It says that $A^{T}A = I$, so in our "main matrix" the diagonal entries are all 1 and the other entries are 0; in other words, $v_{i} \cdot v_{i} = 1$ for each $i$, and $v_{i} \cdot v_{j} = 0$ if $i \neq j$. The latter expression tells us the columns are orthogonal, and the former tells us that these columns are actually orthonormal (each has length 1). Hence $A$ is an orthogonal matrix.
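To close the loop, here's a minimal numpy sketch (a sanity check of the theorem, using a rotation matrix as the example orthogonal matrix) verifying both directions numerically:

```python
import numpy as np

theta = 0.7  # any angle gives an orthogonal 2x2 rotation matrix
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Forward direction: orthonormal columns imply Q^T Q = I,
# so Q^T is the inverse of Q.
print(np.allclose(Q.T @ Q, np.eye(2)))        # True
print(np.allclose(Q.T, np.linalg.inv(Q)))     # True

# Converse: since Q^T Q = I, each column has unit length...
print(np.allclose(np.linalg.norm(Q, axis=0), 1.0))  # True
# ...and distinct columns are orthogonal.
print(np.isclose(np.dot(Q[:, 0], Q[:, 1]), 0.0))    # True
```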