In the last linear algebra post we went over the Cayley-Hamilton theorem, which states that every square matrix satisfies its own characteristic polynomial.  We even used this to prove somethin’ pretty kickin’.  But if you ask someone what their favorite theorem in linear algebra is, they’d probably say the rank-nullity theorem.  This theorem, which relates the dimension of a vector space to the dimensions of the image and kernel of a linear map on it, is cited much more frequently, and it has a few pretty surprising corollaries.  Let’s dig right in.
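For reference, here’s the statement (the standard one, since this excerpt doesn’t spell it out): if T: V \to W is a linear map and V is finite-dimensional, then

\displaystyle \dim(V) = \dim(\ker(T)) + \dim(\mathrm{im}(T))

that is, the dimension of the domain splits into the nullity plus the rank.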


Just a little post.  Let’s suppose that we have some vector field g, and let’s say that, because we’re so clever, we find that there is some function f such that \nabla f = g.  How lucky!  Then we can do something really cute.
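To spoil the punchline a little (this is the fundamental theorem for line integrals, which is presumably where the post goes): if C is any curve running from a point p to a point q, then

\displaystyle \int_C g \cdot d\mathbf{r} = \int_C \nabla f \cdot d\mathbf{r} = f(q) - f(p)

so the line integral of g doesn’t depend on the path at all, only on the endpoints.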


This post definitely could win an award for “most boring title ever,” but this is kind of a neat idea, and it’s a sweet application of the Cayley-Hamilton theorem.

So, here’s the problem.  You have some little matrix M and you want to calculate M^{43}, but you don’t have your trusty graphing calculator.  Yeah, this could get really, really annoying.  Especially if M is more than a 2\times 2 matrix.  Ugh.
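Here’s the idea in miniature for the 2\times 2 case (a sketch, stated ahead of the full post): the characteristic polynomial of M is \lambda^2 - \mathrm{tr}(M)\lambda + \det(M), and Cayley-Hamilton says M satisfies it, so

\displaystyle M^2 = \mathrm{tr}(M)\,M - \det(M)\,I

and, multiplying by M and substituting this back in over and over, every power M^n collapses to the form a_n M + b_n I.  Computing M^{43} then only requires updating two scalars at each step, never multiplying matrices together.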


Eigen-things.

May 25, 2010

This is going to be a short post about how to find an eigenvalue and its corresponding eigenvectors.  Okay, for a matrix A and a nonzero vector v, we want to find a \lambda such that Av = \lambda v.  This means that when we multiply v by the matrix, we get a scalar multiple of that same vector!  Pretty sweet.  How do we find such a thing?
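To give away the standard method: Av = \lambda v has a nonzero solution v exactly when A - \lambda I fails to be invertible, so the eigenvalues are precisely the roots \lambda of the characteristic equation

\displaystyle \det(A - \lambda I) = 0

and, for each such root, the eigenvectors are the nonzero solutions v of (A - \lambda I)v = 0.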


Every so often, when I’m tutoring someone, a problem will require the student to take an inverse of a 3×3 matrix, or the determinant of a 4×4 matrix.  The student replies, “Oh, this is easy…there’s a command for it on my calculator.”  At that point, I think to myself, “Yes, that’s certainly nice…but what if we didn’t have a calculator?”  And furthermore, how am I going to check to see if the answer is correct?  I rarely bring a graphing calculator with me.  What a terrible situation!
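For determinants, at least, the by-hand method is cofactor expansion (standard fare, though this excerpt doesn’t show it).  Expanding along the first row of a 3\times 3 matrix,

\displaystyle \det\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = a(ei - fh) - b(di - fg) + c(dh - eg)

and a 4\times 4 determinant expands, in exactly the same way, into four 3\times 3 determinants.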


When we do integration over surfaces, we want to “cut up our surfaces” into little boxes.  This makes sense, right?  Think about an orange.

[Image: an orange]

Now, if we want to find the surface area of this orange, we have to measure “little bits of area” of it.  In order to do that, we need to peel the orange.  Once we have the peel off, let’s put it down on a table and cut it up into little one-inch-by-one-inch squares.  Then we’ll have a little bit left over, but we’ll be able to estimate this reasonably.  We can cut the peel into smaller pieces and get a better estimate, and if we keep cutting smaller and smaller pieces we get better and better estimates; in the limit, this process gives the exact surface area.

But what if we couldn’t peel this orange?  What if we needed to figure out what the surface area was while the skin was still on?  We could make little “boxes” on the orange and just approximate the area of those boxes, and this is generally how we do it for real surfaces.
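For the record, here’s where the box-cutting picture ends up (the standard surface area formula, stated ahead of the details): if the surface is described by a parametrization \mathbf{r}(u,v), each little box has area approximately |\mathbf{r}_u \times \mathbf{r}_v|\,\Delta u\,\Delta v, and in the limit

\displaystyle A = \iint_D |\mathbf{r}_u \times \mathbf{r}_v|\,du\,dv

where D is the region of (u,v) values that sweeps out the surface.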


I was all set to post about Lagrange multipliers when I realized that they are pretty dull.  Since this is my blog, after all, I thought I’d take a slight detour and talk about why we’d ever care about taking the gradient of functions, besides all of that max-min stuff I talked about before.  We haven’t formally talked about line integrals yet, but they are not horrible creatures: if you have not seen them yet, just think about integrals in one dimension.

As it turns out, and you can probably imagine why, the gradient acts as a kind of “universal derivative” in multivariable calculus.  Whereas we had, in single variable calculus,

\displaystyle \int_{a}^{b} f'(x)dx = f(b) - f(a)

we don’t have a single nice “anti-derivative” in vector calculus for which an equality like this holds!  After all, we could ask, “the anti-derivative…in what direction?”
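To foreshadow the answer: the gradient is exactly the object that fixes this.  For a curve C running from a point a to a point b, the fundamental theorem for line integrals says

\displaystyle \int_C \nabla f \cdot d\mathbf{r} = f(b) - f(a)

so \nabla f plays the role in several variables that f' played in one.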


Last time, we noticed that when \nabla f(x,y) = 0 at some point (x,y) we have that one of two things happens: either there’s a max or min, or there’s a saddle point.  So, I guess, that’s kind of three things.  Okay, three things can happen: max, min, or saddle point.  Let’s take a look at these.

Our first function is the function f(x,y) = x^2 + y^2 + xy.

[Image: the graph of f(x,y) = x^2 + y^2 + xy]

It kind of looks like a big hammock!  Okay, now, let’s note that \nabla f(x,y) = (2x + y, 2y + x), which is equal to zero exactly when both of its components are: solving 2x + y = 0 and 2y + x = 0 gives only the point (0,0), so that is the one place where the gradient is zero.  This means we have to have something happening at this point!  Can you see what it is?  That’s right, kiddo, it’s a minimum.
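We can confirm this without staring at the picture by using the second derivative test (which this excerpt hasn’t introduced yet): here f_{xx} = 2, f_{yy} = 2, and f_{xy} = 1, so

\displaystyle D = f_{xx}f_{yy} - f_{xy}^2 = (2)(2) - 1^2 = 3 > 0

and since D > 0 with f_{xx} > 0, the critical point (0,0) is indeed a local minimum.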


Last time, we talked about the gradient.  Let’s remind ourselves of what this means: for a function (let’s say, a differentiable one of two variables) f(x,y), we have the associated vector field \nabla f(x,y) = (\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}).

Now, we need to introduce the dot product.  While the dot product of two things is relatively easy to calculate, it’s a bit tricky to figure out what it “means” to take the dot product of two vectors.  Let’s define this first:

A\cdot B = |A||B|\cos(\theta)

where |A| is the length of the vector A (given as its magnitude, calculated in the standard way: the square root of the sum of the squares of its components!) and \theta is the angle between the two vectors.  Let’s note here that the dot product takes two vectors and spits out a scalar.
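The “relatively easy to calculate” part deserves to be written down: in coordinates, if A = (a_1, \ldots, a_n) and B = (b_1, \ldots, b_n), then

\displaystyle A \cdot B = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n

and the pleasant (and not at all obvious!) fact is that this simple sum always agrees with the cosine formula above.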


Last time we talked about derivatives on surfaces.  We noted that there were two main derivatives that we care about, \frac{\partial}{\partial x} and \frac{\partial}{\partial y}, called the partial derivative in the x-direction and the partial derivative in the y-direction.  To calculate the partial in the x-direction for some function of x and y, we simply treat y as a constant and differentiate with respect to x.

“But!…” you might begin, “But in calculus, we had a function’s graph and we had a graph of the derivative!  Both were usually functions!  Don’t we have anything like that for surfaces?”

Well, yes.  We do.  Sort of.  Because, at every point, we have an x and a y partial derivative, we can assign to each point (x_0, y_0) of our surface f(x,y) an ordered pair of partial derivatives.  Namely, at the point (x_0, y_0), we have the ordered pair of partial derivatives (\frac{\partial f}{\partial x}(x_0,y_0), \frac{\partial f}{\partial y}(x_0,y_0)).  Now, a relatively nice way to represent this kind of thing is as a vector field.  A vector field is sort of like a regular graph, except that instead of plotting a height over each point, we attach a vector to each point.
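A quick concrete example (my own choice of function, just for illustration): take f(x,y) = x^2 y.  Treating y as a constant and differentiating in x, and then the reverse, gives

\displaystyle \nabla f(x,y) = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right) = (2xy, x^2)

so, for instance, this vector field attaches the vector (4, 1) to the point (1, 2).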
