The transcendental functions

The simplest differential equation asks for a function which is its own derivative. The constant zero function presents itself as a boring solution, so we refine the question by asking for the function to be non-zero at least somewhere. If f is a solution, then so is x-> k.f(x+p) for any fixed k and p, so let p be somewhere that f isn't zero and use k = 1/f(p) to get a solution which is 1 at zero. This function is known as exp.

It turns out that exp(0) = 1, exp' = exp suffice to define exp uniquely, and to guarantee that exp is positive (on the reals). Now, x-> k.exp(x+p) is equal to its derivative for any k, p; and when k is 1/exp(p) it maps 0 to 1, so this function must be equal to exp; i.e. exp(x) = exp(x+p)/exp(p) for any p. Using x = -p we obtain exp(-p) = exp(0)/exp(p) = 1/exp(p), and re-arranging the equation tells us exp(x+p) = exp(x).exp(p). We can now recover our original general solution, f: it satisfies f(x+p) = f(p).exp(x); substituting y = x+p, we discover f(y) = f(p).exp(y-p) = exp(y).f(p)/exp(p), whence f(y)/exp(y) = f(p)/exp(p) is constant, hence equal to f(0)/exp(0) = f(0), and f(x) = f(0).exp(x). Thus every solution to the simplest differential equation is proportional to exp.
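For readers who like numerical reassurance, here is a small Python sketch (an illustration under crude assumptions, not part of the argument): it integrates f' = f by Euler steps from an arbitrary non-zero starting value and confirms that the result is just that starting value times exp.

    import math

    def euler_solve(f0, x_max, steps):
        # Crude Euler integration of f' = f from f(0) = f0 up to x_max.
        h = x_max / steps
        f = f0
        for _ in range(steps):
            f += h * f  # each step adds h.f, since f' = f
        return f

    f0 = 3.7  # arbitrary non-zero value of f(0)
    print(euler_solve(f0, 2.0, 1000000))  # about 27.34
    print(f0 * math.exp(2.0))             # the two agree to several places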

Since exp is its own derivative and exp(0) = 1, exp is increasing at 0, whence exp(x) > 1 for x > 0; and the bigger it gets the faster it grows, so it gets bigger and bigger faster and faster as x gets big. Indeed, since n.x is just repeated addition, repeated use of exp(x+p) = exp(x).exp(p) shows that exp(n.x) is exp(x)**n for any whole number n and real x; so we can take `raising to a power', defined by t**0 = 1 and t**(1+n) = t.t**n (hence only for whole number n), and generalise it to `exponentiation', in which exp(x) = exp(1)**x; exp(1) is conventionally called e.
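This is easy to spot-check in Python, using the library's math.exp (the particular x and n below are arbitrary choices):

    import math

    x, n = 0.3, 7
    print(math.exp(n * x), math.exp(x) ** n)  # equal: repeated addition vs repeated multiplication
    print(math.exp(1), math.e)                # exp(1) is the constant e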

One can define a function by x-> sum([..., 1, 0]: i-> x**i / i! :) and verify (first, that the sum always converges - it does - next, that it may be differentiated term by term - it may - and then, easiest of all) that it is equal to its derivative: for each i > 0, the derivative of x-> x**i / i! is x-> i.x**(i-1) / i!, which is simply x-> x**(i-1) / (i-1)!, the previous term in the sum; for i = 0, the derivative is zero; so the sum of the derivatives of the terms is equal to the original sum of terms, and our function is equal to its own derivative. That its value at zero is 1 may readily be checked, so this `infinite polynomial' is equal to exp.
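Rendered as Python (a direct transcription of the partial sums, using the term-to-term recurrence spelled out in the aside below), the series does indeed reproduce the library's exp:

    import math

    def exp_series(x, terms=50):
        # Partial sum of x**i / i! for i from 0 up to terms - 1.
        total, term = 0.0, 1.0  # term starts at x**0 / 0! = 1
        for i in range(terms):
            total += term
            term *= x / (i + 1)  # each term is x/(1+i) times its predecessor
        return total

    for x in (0.0, 1.0, -2.5, 10.0):
        print(x, exp_series(x), math.exp(x))  # the last two columns agree closely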

[Aside: x**i and i! are the i-th power of x and the factorial of i, respectively; they are defined by x**0 = 1 = 0!, x**(1+n) = x.x**n and (1+n)! = (1+n).n!, so the i-th term in our series is x/i times its predecessor. Since i grows steadily from each term to the next and x is the same in all terms, x/i decreases steadily; ultimately i will exceed x, after which each term will be smaller than the one before it; once i exceeds 2.x, each term is less than half its predecessor, so the magnitudes of the terms with i > n, for any n > 2.x, are bounded above by abs(x**n / n!) / 2**(i-n), whence the sum of the terms with i > n is bounded above by abs(x**n / n!), the magnitude of the n-th term. As n increases by 1, this bound shrinks by a factor of at least two, so the tail of the sum shrinks to zero and the sum converges.]
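The bound in the aside is easy to test numerically (the x, n and cut-off below are arbitrary choices, with n > 2.x as the aside requires):

    import math

    x, n = 3.0, 10
    bound = abs(x**n / math.factorial(n))
    tail = sum(x**i / math.factorial(i) for i in range(n + 1, 60))
    print(tail, bound)  # the tail falls comfortably below the bound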

When exp is considered in the complex plane (where -1 has an imaginary square root, call it j), it turns out to be periodic: its period is purely imaginary, of size 2.π. Thus exp(x + 2.π.j) = exp(x) for all x.
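Python's complex numbers happen to use the same name, j, for a square root of -1, so this is easy to see with the cmath module (the sample input z is an arbitrary choice):

    import cmath, math

    z = 1.3 + 0.7j  # any complex input will do
    print(cmath.exp(z))
    print(cmath.exp(z + 2 * math.pi * 1j))  # the same value: the period is 2.π.j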

Harmonics

The simplest second-order differential equation is just f'' = f, but it is easy to see that solutions to f' = f and f' = -f will satisfy this -- i.e. we can reduce it to a first-order differential equation, so it's too easy. The next simplest, f'' = -f, may likewise be handled as f' = j.f with j a square root of -1 (which says f is proportional to x-> exp(j.x)); but it may equally be analysed without involving imaginary numbers.
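That x-> exp(j.x) does satisfy f'' = -f can be spot-checked with a finite-difference second derivative in Python (the step size and sample point are arbitrary choices):

    import cmath

    def f(x):
        return cmath.exp(1j * x)

    x, h = 0.8, 1e-4
    second = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # approximates f''(x)
    print(second)  # close to ...
    print(-f(x))   # ... minus f(x), in both real and imaginary parts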

Given f''=-f, note that differentiating gives us f'''=-f', so f' is a solution of the same equation as f. Next, consider g = f.f + f'.f' and observe

g' = 2.f.f' + 2.f'.f'' = 2.f'.(f + f'') = 2.f'.(f - f) = 0

so g is constant. Given that g is a sum of squares, we can take its square root, k: f and f' are bounded above by k and below by -k. For any given input, x, to f we can now assert that one of the following conditions holds (a numerical sketch after this case analysis shows the whole cycle):

f(x) = 0, f'(x) = ±k

with f'(x) = k, this means f(x) is increasing, so for y slightly greater than x, f(y) will be positive while, for y slightly less than x, f(y) will be negative; in both cases, f'(y) will be slightly less than k (since f'.f' = k.k - f.f). Likewise, with f'(x) = -k, f is decreasing through zero in the mirror-image fashion.

f'(x) = 0, f(x) = ±k

with f(x) = -k, so f''(x) = k, this means f' is increasing through zero at x: it has just been negative and is about to be positive, so f has just decreased, down to -k, and is about to increase back up from there - a local minimum. Likewise, f(x) = k gives a local maximum.

0 < f(x) < k, 0 < f'(x) < k

f is positive and increasing, so will remain positive for now; but f'' = -f is negative, so f' is decreasing: f climbs towards k while f' falls towards 0, heading for the local maximum case above.

0 < f(x) < k, 0 > f'(x) > -k

f is positive but decreasing; f'' = -f is negative, so f' goes on decreasing: f falls towards 0 while f' falls towards -k, heading for the downward crossing of zero above.

0 > f(x) > -k, 0 < f'(x) < k

f is negative but increasing; f'' = -f is positive, so f' goes on increasing: f climbs towards 0 while f' climbs towards k, heading for the upward crossing of zero above.

0 > f(x) > -k, 0 > f'(x) > -k

f is negative and decreasing; f'' = -f is positive, so f' is increasing: f falls towards -k while f' climbs towards 0, heading for the local minimum case above.
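Together these cases chain into an endless cycle: up through zero, up to a maximum at k, down through zero, down to a minimum at -k, and round again. A minimal numerical sketch in Python (using a velocity-Verlet integrator - my choice of method, not the text's - with the arbitrary initial conditions f(0) = 0, f'(0) = 1) shows both the cycle and the constancy of g; the cycle's length, 2.π, matches the period of exp noted earlier:

    import math

    # Integrate f'' = -f from f(0) = 0, f'(0) = 1 over one full cycle,
    # watching g = f.f + f'.f' stay at its initial value, 1.
    f, df, h = 0.0, 1.0, 1e-4
    for _ in range(int(2 * math.pi / h)):
        a = -f                   # f'' = -f
        f += h * df + 0.5 * h * h * a
        df += 0.5 * h * (a - f)  # average of old f'' and new f'' = -f
    print(f, df)            # back near the start: f = 0, f' = 1
    print(f * f + df * df)  # g = 1, as the derivation above promised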

Written by Eddy.