Logarithm

I shall describe any homomorphism from a multiplication to an addition as logarithmic. I shall begin by discussing a function whose outputs are (when mappings) logarithmic from the multiplication on {positives} to the addition on {reals}, but discussed in a generality that allows their application wherever power's definition can sensibly be applied or extended. The general properties of such mappings from {positives} to {reals} then prepare the ground for the introduction of the natural logarithm, in terms of which all other logarithmic functions may be expressed.

Let log = reverse∘transpose(power). This is

log = (: reverse(: power(n, x) ←n :) ←x :)

so log(x, v) = n precisely if power(n, x) = v, and thus power(log(x, v), x) = v for all positive x ≠ 1 and v. (This is x^(log_x(v)) = v in orthodox notation.) As power(0, x) = 1 and power(1, x) = x, log(x) relates 0 to 1 and 1 to x, i.e. log(x, 1) = 0 and log(x, x) = 1, for all x.
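
As an aside, not part of the construction, the relationship between log and power is easy to check numerically; the Python sketch below uses math.log(v, x) as a stand-in for log(x, v) and x**n for power(n, x):

    import math

    x, v = 3.0, 20.0                            # any positive x other than 1, and positive v
    n = math.log(v, x)                          # log(x, v)
    assert abs(x**n - v) < 1e-9                 # power(log(x, v), x) = v
    assert math.log(1.0, x) == 0.0              # log(x, 1) = 0
    assert abs(math.log(x, x) - 1.0) < 1e-12    # log(x, x) = 1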

As power(x, 1) = 1 for all x, log(1) relates every input power can accept to 1; it has only one right value, to which it relates every left value, so it is not a mapping. As power(0, 0) = 1, log(0) does relate 0 to 1 (and can be thought of as mapping 1 to 0); however power(x, 0) = 0 for all positive x, so log(0) relates all positives to 0 (and doesn't relate anything else to anything else); as a result, it isn't a mapping either. So, while log(0) and log(1) are well-defined as relations, they're not mappings.

As transpose(power, x) is strictly monotonic (increasing for x > 1, decreasing for 0 < x < 1) for all positive x other than 1, log(x) is a mapping when either 0 < x < 1 or 1 < x. As power(−n, x) = 1/power(n, x) = power(n, 1/x), log(1/x) = −log(x), so everything of interest about log(x) for 0 < x < 1 can be inferred from log(z) for some z > 1. Thus I shall attend primarily to log(x) for x > 1.

Consider log(x, u.v), with power(log(x, u.v), x) = u.v = power(log(x, u), x).power(log(x, v), x); and each (: power(t, x) ←t :) is exponential so this last is power(log(x, u) +log(x, v), x); as long as log(x) is a mapping (i.e. (: power(t, x) ←t :) is monic), this makes log(x, u.v) = log(x, u) +log(x, v), so that each log(x) that's a mapping is a homomorphism from a multiplication to an addition; every mapping that's an output of log is logarithmic (and this, of course, is the reason for its name).
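
Again as a purely numerical illustration (using Python's math.log(v, x) as a stand-in for log(x, v)), the homomorphism property can be checked directly:

    import math

    x, u, v = 2.0, 3.5, 7.25                    # arbitrary positives, with x != 1
    lhs = math.log(u * v, x)                    # log(x, u.v)
    rhs = math.log(u, x) + math.log(v, x)       # log(x, u) + log(x, v)
    assert abs(lhs - rhs) < 1e-12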

Now, power(log(x, v), x) = v = power(log(y, v), y), so power(log(x, v)/log(y, v), x) = y = power(log(x, y), x) whence log(x, v)/log(y, v) = log(x, y) at least in so far as (: power(u, x) ←u :) is monic, i.e. log(x) is a mapping. This gives us log(x, v) = log(x, y).log(y, v) for all x, y, v; whence log(x) = log(x, y).log(y) as functions, for all x, y. In particular, 1 = log(x, x) = log(x, y).log(y, x), i.e. log(y, x) = 1/log(x, y), whenever log(x) and log(y) are mappings, in particular whenever x and y are positives other than 1. When x has a multiplicative inverse, log(x) relates −1 to 1/x; when log(x) is a mapping, this gives log(1/x, y) = log(1/x, x).log(x, y) = −log(x, y) and, when log(y) is also a mapping, log(y, 1/x) = −log(y, x).
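
The change-of-base identity and its consequences can likewise be exercised numerically (again with Python's math.log(v, x) standing in for log(x, v)):

    import math

    x, y, v = 2.0, 5.0, 11.0                    # positives other than 1
    # log(x, v) = log(x, y).log(y, v)
    assert abs(math.log(v, x) - math.log(y, x) * math.log(v, y)) < 1e-12
    # log(y, x) = 1/log(x, y)
    assert abs(math.log(x, y) - 1.0 / math.log(y, x)) < 1e-12
    # log(y, 1/x) = -log(y, x)
    assert abs(math.log(1.0 / x, y) + math.log(x, y)) < 1e-12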

General properties

Consider any logarithmic f; i.e. a homomorphism from a multiplication to an addition, f(x.y) = f(x) +f(y) and f(1) = 0. If we scale f by a constant, k.f(x.y) = k.(f(x) +f(y)) = k.f(x) +k.f(y), at least when our multiplication distributes over our addition (e.g. when f's outputs and k are values of some ringlet), making k.f also logarithmic for any k. When the outputs of f lie in some module over a ringlet, e.g. in a vector space, selecting a single component of it respects addition, so the composite of such a selection after f will be logarithmic when f is; for example, the real (or imaginary) part of a complex-valued logarithmic function.
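
For instance, taking Python's natural logarithm as a representative logarithmic f, scaling by a constant k visibly preserves the two defining properties; this is only a numerical illustration of the remark above:

    import math

    def scaled(k, f):
        # scale a logarithmic map f by the constant k
        return lambda t: k * f(t)

    g = scaled(2.5, math.log)
    x, y = 3.0, 7.0
    assert abs(g(x * y) - (g(x) + g(y))) < 1e-12    # g(x.y) = g(x) + g(y)
    assert g(1.0) == 0.0                            # g(1) = 0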

As long as logarithmic ({reals}: f :) isn't (everywhere) zero, there's some x for which f(x) is non-zero, whence there is some integer n for which 1 < n.f(x) = f(power(n, x)) and so there is some input, u = power(n, x), for which f(u) > 1; and thus, for every positive natural m, m < m.f(u) = f(power(m, u)), so no natural is an upper bound on the outputs of f; likewise, by taking the multiplicative inverses of inputs with large positive output, we get inputs that f maps to arbitrarily large negative values; there is no bound on the range of f's outputs, unless f is zero.

Furthermore, any logarithmic (: f |{positives}) and any positive x with f(x) non-zero give us, for every positive natural n, power(1/n, x) which f maps to f(x)/n, whence we have inputs that f maps to non-zero values arbitrarily close to zero; and we can subdivide the range between two outputs, f(x) and f(r.x), arbitrarily finely with f(x.power(i/n, r)) for various i and n to establish that any logarithmic (: |{positives}) is continuous.
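
A quick numerical sketch of the first of these claims (with Python's math.log as a non-zero logarithmic map): the outputs f(x)/n get as close to zero as we please while remaining non-zero:

    import math

    f, x = math.log, 5.0        # a non-zero logarithmic map and an input with f(x) != 0
    for n in (1, 10, 100, 1000):
        # power(1/n, x) is mapped to f(x)/n: non-zero outputs arbitrarily close to zero
        assert abs(f(x ** (1.0 / n)) - f(x) / n) < 1e-9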

As any non-zero logarithmic ({reals}: :) has unbounded range, and any logarithmic (: |{positives}) is continuous, we can infer that any non-zero logarithmic ({reals}: f |{positives}) is in fact ({reals}| f |{positives}), so every real k is f(v) for some positive v.

Now, for a logarithmic (: f |{positives}), suppose we're given x, y for which f(x) = f(y); thus f(x/y) = f(x) −f(y) = 0; thus f(power(n, x/y)) = n.f(x/y) = 0 for every integer n. If x and y are distinct, this gives us infinitely many inputs (forming a geometric sequence) that f maps to zero. If a positive r has power(n, r) = power(m, x/y) for any non-zero integral n, m, then n.f(r) = m.f(x/y) = 0, so f(r)'s order, in the output addition, is a divisor of (non-zero) n. That can be realised by f(r) = 0 with order 1, which trivially divides all n; otherwise, it can only happen if the addition has some non-zero values with non-zero order. Thus a logarithmic (: f |{positives}), whose output addition (e.g. real addition) has order zero for all non-zero values, takes value zero at every rational power of any input at which its output is zero. Thus, if a logarithmic function ({reals}: |{positives}) isn't monic, it has zero output at a dense sub-set of {positives} and continuity obliges it to be zero everywhere; so any non-zero logarithmic ({reals}: f |{positives}) is monic and, thanks to the orderings on {reals} and {positives}, necessarily strictly monotonic.

Now consider two homomorphisms from positive multiplication to real addition, logarithmic f, g each ({reals}: |{positives}). If they're both zero, then they're equal and, in particular, either of them is the other scaled by one of its outputs. Otherwise, one of them is non-zero and thus ({reals}| :{positives}), continuous and monic; suppose this is f. Since f(1) = 0 and f is monic, any x other than 1 has non-zero f(x) and we can form the ratio g(x)/f(x). With y = power(r, x) for any rational r, we obtain g(y) = g(power(r, x)) = r.g(x) = r.f(x).(g(x)/f(x)) = f(power(r, x)).(g(x)/f(x)) = f(y).g(x)/f(x). Thus g(y) = f(y).g(x)/f(x) holds true for all y in a dense sub-set of {positives} and continuity of the expressions it equates requires it to hold true for every positive y. Consequently, given any non-zero logarithmic ({reals}: f |{positives}), any logarithmic ({reals}: g |{positives}) can be expressed as k.f for some constant real k; and this k is necessarily an output of f.
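
The conclusion is easy to illustrate numerically; here, taking Python's natural logarithm as f and its base-ten logarithm as g purely for example's sake, the single ratio k = g(x)/f(x) suffices for every input:

    import math

    f = math.log                            # one non-zero logarithmic ({reals}: |{positives})
    g = lambda t: math.log(t, 10.0)         # another (the base-ten logarithm)
    x = 2.0                                 # any positive x other than 1
    k = g(x) / f(x)
    for y in (0.3, 1.0, 4.7, 123.0):
        assert abs(g(y) - k * f(y)) < 1e-12     # g = k.f throughout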

In particular, log(x) is logarithmic ({reals}: |{positives}) for every positive x ≠ 1; hence any logarithmic g maps each y to log(x, y).g(u)/log(x, u) for any positive u ≠ 1; and log(x, y)/log(x, u) = log(u, y), so g(y) = g(u).log(u, y) or g = g(u).log(u) for any positive u ≠ 1. If g(u) is 0, in such a case, then g is zero; otherwise, 1/g(u) is an output of log(u), i.e. 1/g(u) = log(u, v) for some v; whence g(u) = log(v, u) and g(y) = log(v, u).log(u, y) = log(v, y) so g = log(v). Thus any logarithmic mapping is either zero (and boring) or an output of log. Furthermore, every non-zero logarithmic g, by being log(v) for some v, gives us log(x, y) = log(v, y)/log(v, x) = g(y)/g(x) so generates log = (: g/g(x) ←x :). We only ever need one logarithmic function to describe all logarithmic functions.

The Natural Logarithm

I'll now derive a non-zero logarithmic function known as the natural logarithm and show how it produces some natural results, which shall give us a new way to understand (and generalise) the extension of power. The natural logarithm's orthodox name, ln, is an abbreviation of logarithm natural, most likely via the Latin logarithmus naturalis, in which the adjective follows the noun it applies to.

The positive reals form a multiplicative group with a cancellable addition, whose completion will give us the reals. In the two-dimensional quadrant {lists ({positives}: |2)}, the set {[x, y]: x.y = 1} depicts the multiplicative inversion function, power(−1). This curve is invariant under a rescaling of the quadrant that scales each co-ordinate independently, provided the product of the factors is 1; thus, for any positive c, {[c.x, y/c]: x.y = 1} is the same curve. Let us now consider the area of any region bounded by: this curve, an edge of the quadrant and any pair of lines parallel to the other edge of the quadrant, at distances a and b from that edge, with a < b. Let the area thus enclosed be denoted A(a, b) for now. Since juxtaposing two such strips between the curve and the axis produces a wider strip, A(a, b) +A(b, c) = A(a, c) for all positive a, b, c with a < b < c. Note, by observing a rectangle inside the area and a rectangle that contains it, that (b−a)/b < A(a, b) < (b−a)/a.
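
None of what follows depends on it, but the area A is easy to approximate numerically, which makes the additivity and the rectangle bounds tangible; the Python sketch below uses a simple midpoint rule as a stand-in for A:

    def area(a, b, steps=100000):
        # midpoint-rule estimate of the area between the curve y = 1/x and the axis, from a to b
        h = (b - a) / steps
        return sum(h / (a + (i + 0.5) * h) for i in range(steps))

    a, b, c = 1.0, 2.0, 5.0
    assert abs(area(a, b) + area(b, c) - area(a, c)) < 1e-6     # strips juxtapose: A(a, b) + A(b, c) = A(a, c)
    assert (b - a) / b < area(a, b) < (b - a) / a               # rectangle bounds on A(a, b)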

Indeed, we can refine this bound: first consider any point [x, y] on the chord from [a, 1/a] to [b, 1/b]; it has

y = 1/b +(1/a −1/b).(b −x)/(b −a)
= 1/b +(b −x)/b/a
= (a +b −x)/b/a, whence
x.y −1
= x.(a +b −x)/a/b −1
= (a.(x −b) +(b −x).x)/a/b
= (b −x).(x −a)/a/b

which is necessarily >0 for a < x < b, hence x.y > 1 and y > 1/x on the interior of the chord, which thus lies above the curve, hence the area under the chord is greater than the area under the curve and A(a, b) < (1/b +1/a).(b −a)/2 = (a +b).(b −a)/a/b/2 = (b.b −a.a)/a/b/2 = (b/a −a/b)/2, at least for b > a.

Scale the quadrant in its two directions so as to map the curve to itself; since this scales areas by the product of its two scaling factors, and it maps the curve to itself precisely when that product is 1, it preserves areas; consequently, A(c.a, c.b) = A(a, b) for all positive a, b, c with b > a. In particular, picking c = 1/a, we obtain A(1, b/a) = A(a, b), so A only depends on the ratio of its two parameters and thus can be re-written in terms of a function of that ratio, which I'll call ln = A(1) = (: A(1, x) ←x :), for which A(a, b) = ln(b/a). The bounds on A now give us 1 −a/b < ln(b/a) < (b/a −a/b)/2, for b > a, whence 1 −1/x < ln(x) < (x −1/x)/2, at least for x > 1.
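
The same midpoint-rule stand-in for A lets us check, numerically, both the scale-invariance and the resulting bounds on ln; this is an illustration only, not part of the argument:

    def area(a, b, steps=100000):
        # midpoint-rule estimate of A(a, b), the area under y = 1/x from a to b
        h = (b - a) / steps
        return sum(h / (a + (i + 0.5) * h) for i in range(steps))

    a, b, c = 1.5, 4.0, 7.0
    assert abs(area(c * a, c * b) - area(a, b)) < 1e-6          # A(c.a, c.b) = A(a, b)
    for x in (1.1, 2.0, 10.0):
        ln_x = area(1.0, x)                                     # ln(x) = A(1, x)
        assert 1 - 1 / x < ln_x < (x - 1 / x) / 2               # the bounds, for x > 1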

As noted above, A(a, b) +A(b, c) = A(a, c) whenever 0 < a < b < c; whence ln(b/a) +ln(c/b) = ln(c/a), whence ln(x) +ln(y) = ln(x.y), just as for a homomorphism from a multiplication to an addition. Thus far, ln is only defined for inputs > 1; we can now extend its definition to positive inputs ≤ 1 by additive completion in the outputs, giving ({reals}: ln :{positives}) with:

ln(x) = A(1, x) for x > 1, ln(1) = 0 and ln(x) = −ln(1/x) for 0 < x < 1.

This extends ln as a homomorphism from positive multiplication to real addition, making it logarithmic ({reals}: |{positives}). As it is non-zero, the reasoning above tells us it is log(e) for some positive e. By construction, A(a, b) increases monotonically as b increases or a decreases; whence ln(x) increases with x for x > 1. Since 1/x and ln(1/x) = −ln(x) decrease with increasing x, ln(1/x) in fact increases as 1/x increases, for x > 1; whence ln(x) in fact increases monotonically with x for all positive x.

Furthermore, since ln(b) −ln(a) = ln(b/a) and, for b > a, this lies between (b−a)/b and (b−a)/a, we have 1/b < (ln(b) −ln(a))/(b −a) < 1/a. Thus, for any closed interval (1-simplex) about an input x, the range of gradients of chords of ln within that interval lies within the interval between the inverses of its bounding inputs; which may be made as narrow as we like, though it always includes 1/x. Any other gradient differs from 1/x, so we can find a value between it and 1/x whose multiplicative inverse we can use as one end of an interval about x, within which no chord has this other gradient. Consequently, the intersection of gradient-intervals – that contain all gradients of chords of ln within intervals about x – is simply {1/x} and ln is differentiable at x with ln'(x) = 1/x. In particular, as it is everywhere differentiable, ln varies continuously; indeed, it is even smooth.
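
As a numerical check of this differentiation argument (using Python's math.log as a stand-in for ln), chord gradients about x are indeed pinched between 1/b and 1/a and settle on 1/x:

    import math

    x = 3.0
    for h in (0.1, 0.01, 0.001):
        a, b = x - h, x + h
        chord = (math.log(b) - math.log(a)) / (b - a)   # gradient of a chord of ln about x
        assert 1.0 / b < chord < 1.0 / a                # pinched between 1/b and 1/a
    assert abs(chord - 1.0 / x) < 1e-3                  # the pinch forces ln'(x) = 1/x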

Now, as ln is logarithmic and non-zero, log(x, y) = ln(y)/ln(x) for all positive x ≠ 1 and y. As ln is monotonically increasing, ln(x) is positive precisely if x > 1 and negative precisely if x < 1; so log(x, y) is positive if x and y are both < 1 or both > 1; while log(x, y) is negative if one of x, y is < 1 and the other is > 1.

As ln is a strictly monotonically increasing homomorphism ({reals}| ln |{positives}) from positive multiplication to real addition, its reverse is also a strictly monotonically increasing mapping and a homomorphism ({positives}| |{reals}) from real addition to positive multiplication; i.e. an increasing exponential. We shall, indeed, take it as the definitive exponential function, give it the name exp and express all other exponentials in terms of it.
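
A final numerical aside (with Python's math.exp and math.log standing in for exp and ln): each reverses the other, and exp turns addition back into multiplication:

    import math

    for x in (0.25, 1.0, 3.0, 42.0):
        assert abs(math.exp(math.log(x)) - x) < 1e-9 * x    # exp reverses ln
    for t in (-2.0, 0.0, 1.0, 5.0):
        assert abs(math.log(math.exp(t)) - t) < 1e-9        # ln reverses exp
    s, t = 0.7, 1.9
    # exp is a homomorphism from real addition to positive multiplication
    assert abs(math.exp(s + t) - math.exp(s) * math.exp(t)) < 1e-9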


Written by Eddy.