Okay, so, we know (or you should know!) what cross and dot products are for Euclidean spaces.  We use them all the time!  On the daily!  So, I’m sure you’ve seen the equalities

a\cdot b = \|a\|\|b\|\cos(\theta)

\|a\times b\| = \|a\|\|b\|\sin(\theta)

where \theta is the angle between the vectors a and b.  Where do these equalities come from?  Let’s find out.
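Before we get into the derivation, here’s a quick numerical sanity check you can run.  The vectors and the angle are made-up test values of mine, built so that \theta really is the angle between a and b:

```python
import numpy as np

# Build the angle into the vectors by hand: a points along the x-axis and b
# is a unit vector rotated by theta in the xy-plane, then scaled, so theta
# really is the angle between them.
theta = 0.7
a = 2.0 * np.array([1.0, 0.0, 0.0])
b = 3.0 * np.array([np.cos(theta), np.sin(theta), 0.0])

lhs_dot = np.dot(a, b)
rhs_dot = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)

lhs_cross = np.linalg.norm(np.cross(a, b))
rhs_cross = np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta)

print(lhs_dot, rhs_dot)      # both 6*cos(0.7) ≈ 4.589
print(lhs_cross, rhs_cross)  # both 6*sin(0.7) ≈ 3.865
```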


This is going to be a short post about basic differentiating and integrating of multivariable functions.  Remember, just because something is polar doesn’t mean that it’s not fun! But, if something is fun, then it rarely is polar.  Just saying.


Just a little post.  Let’s suppose that we have some vector field g, and let’s say that, because we’re so clever, we find that there is some function f such that \nabla f = g.  How lucky!  Then we can do something really cute.


When we do integration over surfaces, we want to “cut up our surfaces” into little boxes.  This makes sense, right?  Think about an orange.

[image: an orange]

Now, if we want to find the surface area of this orange, we have to measure “little bits of area” of it.  In order to do that, we need to peel the orange.  Once we have the peel off, let’s put it down on a table and cut it up into little one inch by one inch squares.  Then we’ll have a little bit left over, but we’ll be able to estimate this reasonably.  We can cut it up into smaller pieces and get a better estimate.  We can keep cutting smaller and smaller pieces and get better and better estimates, and, in the limit, this is essentially the algorithm for finding the exact surface area.

But what if we couldn’t peel this orange?  What if we needed to figure out what the surface area was while the skin was still on?  We could make little “boxes” on the orange and just approximate the area of those boxes, and this is generally how we do it for real surfaces.
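To make the “little boxes” idea concrete, here’s a minimal numerical sketch.  A unit sphere stands in for the orange, and the choice to flatten each box into two triangles (and the particular grid sizes) is mine, not something from the post:

```python
import numpy as np

def sphere_point(theta, phi, r=1.0):
    """A point on a sphere of radius r (the sphere is standing in for the orange)."""
    return r * np.array([np.sin(phi) * np.cos(theta),
                         np.sin(phi) * np.sin(theta),
                         np.cos(phi)])

def approx_sphere_area(n):
    """Cut the skin into an n-by-n grid of little 'boxes', flatten each box
    into two triangles, and add up all of the flat areas."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n + 1)
    phis = np.linspace(0.0, np.pi, n + 1)
    total = 0.0
    for i in range(n):
        for j in range(n):
            # The four corners of one little box on the skin.
            p00 = sphere_point(thetas[i], phis[j])
            p10 = sphere_point(thetas[i + 1], phis[j])
            p01 = sphere_point(thetas[i], phis[j + 1])
            p11 = sphere_point(thetas[i + 1], phis[j + 1])
            # Area of the two flat triangles that approximate the box.
            total += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            total += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    return total

for n in (10, 40, 160):
    print(n, approx_sphere_area(n))   # creeps up toward 4*pi ≈ 12.566
```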


I was all set to post about Lagrange multipliers when I realized that they are pretty dull.  Since this is my blog, after all, I thought I’d take a slight detour and talk about why we’d ever care about taking the gradient of functions, besides all of that max-min stuff I talked about before.  We haven’t formally talked about line integrals yet, but they are not horrible creatures: if you have not seen them yet, just think about integrals in one dimension.

As it turns out, and you can probably imagine why, the gradient acts as a kind of “universal derivative” in multivariable calculus.  Whereas we had, in single variable calculus,

\displaystyle \int_{a}^{b} f'(x)dx = f(b) - f(a)

we don’t have a nice “anti-derivative” in vector calculus for which this equality holds!  After all, we could ask “the anti-derivative…in what direction?”
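Before worrying about directions, here’s a tiny numerical check of the single-variable identity above; the function f and the interval are just my own test choices:

```python
import numpy as np

f = lambda x: x**3 - 2.0 * x          # f(x)
fprime = lambda x: 3.0 * x**2 - 2.0   # f'(x)
a, b = 0.5, 2.0

# Left Riemann sum for the integral of f' over [a, b].
n = 100000
dx = (b - a) / n
xs = np.linspace(a, b, n, endpoint=False)
integral = np.sum(fprime(xs)) * dx

print(integral, f(b) - f(a))   # both ≈ 4.875
```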


Last time, we noticed that when \nabla f(x,y) = 0 at some point (x,y) we have that one of two things happens: either there’s a max or min, or there’s a saddle point.  So, I guess, that’s kind of three things.  Okay, three things can happen: max, min, or saddle point.  Let’s take a look at these.

Our first function is the function f(x,y) = x^2 + y^2 +xy.

[image: the graph of f(x,y) = x^2 + y^2 + xy]

It kind of looks like a big hammock!  Okay, now, let’s note that \nabla f(x,y) = (2x + y, 2y + x), which is equal to zero exactly when both of its components are.  Solving 2x + y = 0 and 2y + x = 0 gives only the point (0,0), so that’s where the gradient is zero.  This means we have to have something happening at this point!  Can you see what it is?  That’s right, kiddo, it’s a minimum.
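If you want to check this with a computer algebra system, here’s a minimal sympy sketch.  (The handful of nearby sample points at the end is just my own spot-check, not a proof that it’s a minimum.)

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2 + x*y

# The gradient, exactly as above: (df/dx, df/dy).
grad = (sp.diff(f, x), sp.diff(f, y))
print(grad)                                   # (2*x + y, x + 2*y)

# Both components vanish only at the origin.
print(sp.solve([grad[0], grad[1]], [x, y]))   # {x: 0, y: 0}

# A crude spot-check that (0,0) looks like a minimum: f is 0 there and
# positive at a few nearby points.
for px, py in [(0.1, 0.1), (-0.1, 0.2), (0.3, -0.2)]:
    print(f.subs({x: px, y: py}))
```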


Last time, we talked about the gradient.  Let’s remind ourselves of what this means: for an arbitrary function (let’s say, of two variables) f(x,y), we have the associated vector field \nabla f(x,y) = (\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}).

Now, we need to introduce the dot product.  While the dot product of two things is relatively easy to calculate, it’s a bit tricky to figure out what it “means” to take the dot product of two vectors.  Let’s define this first:

A\cdot B = |A||B|\cos(\theta)

where |A| is the length of the vector A (given as its magnitude, calculated in the standard way: the square root of the sum of the squares of its components!) and \theta is the angle between the two vectors.  Let’s note here that the dot product takes two vectors and spits out a scalar.
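Here’s a small numerical check that the component-by-component sum and the |A||B|\cos(\theta) recipe agree.  I pick the angle first and build the vectors around it, so \theta is the angle between them by construction (the particular numbers are my own test values):

```python
import numpy as np

theta = np.pi / 3
A = np.array([2.0, 0.0])
B = 3.0 * np.array([np.cos(theta), np.sin(theta)])

# |A| "calculated in the standard way": the square root of the sum of the
# squares of the components.
mag_A = np.sqrt(np.sum(A**2))
mag_B = np.sqrt(np.sum(B**2))

component_dot = A[0] * B[0] + A[1] * B[1]      # component-by-component sum
geometric_dot = mag_A * mag_B * np.cos(theta)  # |A||B|cos(theta)

print(component_dot, geometric_dot)   # both 3.0 -- one scalar, not a vector
```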


Last time we talked about derivatives on surfaces.  We noted that there were two main derivatives that we care about, \frac{\partial}{\partial x} and \frac{\partial}{\partial y}, called the partial derivative in the x-direction and the partial derivative in the y-direction.  To calculate the partial in the x-direction for some function of x and y, we simply treat y as a constant and differentiate with respect to x.

“But!…” you might begin, “But in calculus, we had a function’s graph and we had a graph of the derivative!  Both were usually functions!  Don’t we have anything like that for surfaces?”

Well, yes.  We do.  Sort of.  Because, at every point, we have an x and y partial derivative, we can assign to each point (x_0, y_0) of our surface f(x,y) an ordered pair of partial derivatives.  Namely, at the point (x_0, y_0), we have the ordered pair of partial derivatives (\frac{\partial f}{\partial x}(x_0,y_0), \frac{\partial f}{\partial y}(x_0,y_0)).  Now, a relatively nice way to represent this kind of thing is as a vector field.  A vector field is sort of like a regular graph, but for each point on the graph we assign a vector.
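As a tiny sketch of “a vector at every point,” here’s the ordered pair of partials attached to a coarse grid of points, for the sample surface f(x,y) = x^2 + y^2 (the function and the grid are my own example):

```python
import numpy as np

def grad_f(x0, y0):
    # The ordered pair of partial derivatives of f(x, y) = x**2 + y**2,
    # namely (df/dx, df/dy) = (2x, 2y), evaluated at the point (x0, y0).
    return (2.0 * x0, 2.0 * y0)

for x0 in np.linspace(-1.0, 1.0, 3):
    for y0 in np.linspace(-1.0, 1.0, 3):
        print(f"at ({x0:+.1f}, {y0:+.1f}) the vector is {grad_f(x0, y0)}")
```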


Partial Derivatives.

May 15, 2010

When we have a regular ol’ graph in the xy-plane, it’s not that hard to start taking derivatives.  You know how this goes:

  1. Pick a point.
  2. Pick a point close to it.
  3. Make a secant line.
  4. Pick a point closer and make another line.
  5. Eventually, our secant lines will converge to a tangent line at our original point, if the function is sufficiently nice.

In general, then, if a derivative exists, it will be given by

\displaystyle f'(x) = \lim_{\delta x\rightarrow 0} \frac{f(x + \delta x)-f(x)}{\delta x}
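Here’s a quick numerical illustration of that limit, using a sample function of my own choosing (f(x) = x^3 at x = 2, where the true derivative is 12):

```python
f = lambda x: x**3
x = 2.0
for dx in (0.1, 0.01, 0.001, 0.0001):
    print(dx, (f(x + dx) - f(x)) / dx)   # creeps toward 12 as dx shrinks
```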

But this formula becomes astoundingly beautiful if our function is a polynomial.  Suppose that f(x) = a_{n}x^n + \dots + a_{0}.  Then what do we get?

\displaystyle f'(x) = \lim_{\delta x\rightarrow 0}\frac{a_{n}(x + \delta x)^n + \dots + a_0 -(a_{n}x^n + \dots + a_{0})}{\delta x}

And we can make this a lot nastier by doing a binomial expansion.  But suffice it to note that every term without any \delta x will get canceled out, and every remaining term has at least one \delta x in it.  Factoring this one out and canceling it from the top and the bottom, then taking the limit as \delta x\rightarrow 0 we have that the only things which are left are those terms which had exactly one \delta x in them after we expanded the binomial.  These terms are exactly those of the form

a_{i}{i \choose i-1}x^{i-1}

for each power.  But, of course,

{i \choose i-1} = i

This gives us exactly these terms:

na_{n}x^{n-1} + (n-1)a_{n-1}x^{n-2} + \dots + 2a_{2}x + a_{1}

which is a really sweet deal.  So, for polynomials, the deal is to just “pull down the power”, “multiply”, and “subtract 1 from the power.”
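Here’s the “pull down the power, multiply, subtract 1 from the power” recipe as a few lines of code, with a spot-check against the difference quotient.  Representing a polynomial as a list of coefficients [a_0, a_1, \dots, a_n] is my own choice:

```python
def derive_poly(coeffs):
    # "Pull down the power, multiply, subtract 1 from the power."
    return [i * a for i, a in enumerate(coeffs)][1:]

def eval_poly(coeffs, x):
    return sum(a * x**i for i, a in enumerate(coeffs))

# Example: f(x) = 4 + 3x + 2x**2 + 5x**3, so f'(x) = 3 + 4x + 15x**2.
coeffs = [4, 3, 2, 5]
dcoeffs = derive_poly(coeffs)
print(dcoeffs)                     # [3, 4, 15]

# Spot-check against the difference quotient at x = 1.5.
x, dx = 1.5, 1e-6
print(eval_poly(dcoeffs, x))                                    # 42.75
print((eval_poly(coeffs, x + dx) - eval_poly(coeffs, x)) / dx)  # ≈ 42.75
```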
