Cute Proofs: The Product Rule for Derivatives.

November 3, 2010

[Image: Two functions (solid blue and red) make the dotted purple function when multiplied!]

There are always a few calculus students who make the error of taking the derivative of fg and getting f'g'.  Of course, this is not true in general; the actual product rule for derivatives is as follows:

 

Theorem (Product Rule).  If f and g are differentiable, then (fg)' = f'g + g'f.
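
As a quick sanity check, take f(x) = x^2 and g(x) = x^3, so that (fg)(x) = x^5.  Then

(fg)'(x) = 5x^4 = 2x\cdot x^3 + x^2\cdot 3x^2 = f'(x)g(x) + f(x)g'(x)

while the tempting-but-wrong answer f'(x)g'(x) = 2x\cdot 3x^2 = 6x^3 does not match.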

 

Given this formula, it’s a nice exercise for students to find out for which functions it is true that (fg)' = f'g'.
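
(Spoiler for one family of solutions: f(x) = g(x) = e^{2x} works, since

(e^{2x}\cdot e^{2x})' = (e^{4x})' = 4e^{4x} = 2e^{2x}\cdot 2e^{2x} = f'(x)g'(x).

Indeed, if we insist that f = g, the condition becomes 2ff' = (f')^2, so f' = 2f wherever f' \neq 0, which forces f(x) = Ce^{2x}.)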

Anyway, the proof of this theorem is not too difficult, even for calculus students, so I’ll write it out for anyone interested.  This proof is really a "follow your nose" one with only one tricky part.  Let’s do it!

 

Proof.  We want to find (fg)', so let’s set up the normal difference quotient. 

(fg)'(x) = \lim_{h\rightarrow 0}\frac{f(x + h)g(x+h) - f(x)g(x)}{h}

Now the tricky part: we add and subtract f(x)g(x+h) in the numerator.  This is really just adding 0, so it does not change our limit at all.  Thus,

(fg)'(x) = \lim_{h\rightarrow 0}\frac{f(x + h)g(x+h) - f(x)g(x+h) + f(x)g(x+h) - f(x)g(x)}{h}

= \lim_{h\rightarrow 0}\frac{f(x)[g(x+h) - g(x)] + g(x+h)[f(x+h)- f(x)]}{h}

= \left[ \lim_{h\rightarrow 0} f(x)\right]\lim_{h\rightarrow 0}\frac{g(x+h) - g(x)}{h}

+ \left[\lim_{h\rightarrow 0}g(x+h)\right]\lim_{h\rightarrow 0}\frac{f(x+h)- f(x)}{h}

We can split things up like this because each of the individual limits exists: \lim_{h\rightarrow 0} f(x) = f(x), since f(x) does not depend on h at all; \lim_{h\rightarrow 0} g(x+h) = g(x), since differentiability implies continuity; and the two difference quotients converge to g'(x) and f'(x) by assumption.  (Recall that if the limits of two functions both exist, the limit of their product is the product of their limits.)  Anyway, this gives us

= f(x)g'(x) + f'(x)g(x)

which is exactly what we wanted to show.  \Box

 

This proof is pretty cute, I’m not gonna lie, and the only two tough parts were splitting up the product of limits and the initial trick of adding and subtracting a common term.

 

Aren’t you Bored to Death of that Proof, though?

While looking for a decent proof of this, I stumbled upon a really sweet exercise that said, "Use logarithms to prove a version of the product rule."  I’d never seen this done before, so I thought I’d share it.  In this case our functions have to be positive (so that we can take their logarithms), which restricts us a bit.  But it isn’t so bad a restriction: pretty much every function we care about in calculus can be written as the difference of two positive functions, and f, g have to be differentiable (and therefore continuous) for the product rule to apply anyway, so the general case can be recovered from the positive one, as sketched below.
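
Here’s one way that reduction can go, granting the positive-function product rule proved below.  Near a point x, pick constants a and b large enough that f + a and g + b are positive there (possible, since f and g are continuous and hence locally bounded).  Then

fg = (f+a)(g+b) - b(f+a) - a(g+b) + ab

and differentiating, using the positive-case product rule on the first term,

(fg)' = f'(g+b) + (f+a)g' - bf' - ag' = f'g + fg'

once the constant terms cancel.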

Proof.  Suppose that h(x) = f(x)g(x).  This is equivalent to saying

\ln(h(x)) = \ln(f(x)g(x)) = \ln(f(x)) + \ln(g(x))

by the product property of logarithms.  Now, take the derivative of both sides, using the chain rule on each term.  (This is legitimate: h = e^{\ln(f(x)) + \ln(g(x))} is differentiable, being a composition of differentiable functions.)

\dfrac{1}{h(x)}\dfrac{dh}{dx} = \dfrac{1}{f(x)}\dfrac{df}{dx} + \dfrac{1}{g(x)}\dfrac{dg}{dx}

Now we multiply both sides by h = fg, and we obtain

\dfrac{dh}{dx} = g(x)\dfrac{df}{dx} + f(x)\dfrac{dg}{dx}

which is exactly the product rule.  \Box
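
As a concrete run of the same trick, take f(x) = x^2 and g(x) = e^x on x > 0, so that h(x) = x^2e^x.  Then

\ln(h(x)) = 2\ln(x) + x

so differentiating gives

\dfrac{1}{h(x)}\dfrac{dh}{dx} = \dfrac{2}{x} + 1 \quad\Rightarrow\quad \dfrac{dh}{dx} = x^2e^x\left(\dfrac{2}{x} + 1\right) = 2xe^x + x^2e^x

which is exactly f'(x)g(x) + f(x)g'(x).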
