The standard reading of a list of
coefficients as a polynomial uses each
entry f(n) in a list to scale power(n) in a
sum. When we do the same for
sequences – mappings (: |N) either with N natural, making it a list,
or with N = {naturals}, making the sequence infinite – we can still define
the formal algebra for manipulating such lists by the same rules as apply for
polynomials. Such a sequence, when so understood, is described as a power
series. However, if we actually scale each entry
in the sequence by its index's power, we may have problems summing the resulting
sequence of functions.
Since the addition of functions (whose outputs we presumably know how to add) is defined pointwise, summing the sequence of functions will only make sense in so far as, when we evaluate each function at an input, x, we are able to make sense of the sum of their outputs, which shall in turn depend on the sequence of partial sums, (: sum(: f(i).power(i, x) ←i |n) ←n :{naturals}), converging in some sense. That may work only for some limited sub-set of the inputs individual powers would otherwise have accepted; other inputs may give rise to partial sum sequences that don't converge. So the function represented by an infinite sequence of coefficients for power might accept a more limited range of inputs than the individual powers. It will always accept any additive identity among your inputs for power, since power(n, zero) = zero for all positive n, so sum(f.power)(zero) = f(0), the coefficient of power(0), which maps every input (even zero) to a multiplicative identity.
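To make that concrete, here's a little numeric sketch (my own, not part of the formal discussion; the names in it are invented for illustration) using the all-ones coefficient sequence, whose partial sums converge exactly when the input has magnitude less than one:

    # Sketch: partial sums of a power series at a given input x.
    # With f(n) = 1 for every n, sum(f.power)(x) = 1/(1 - x) for |x| < 1,
    # while for |x| >= 1 the partial sums fail to converge.
    def partial_sums(f, x, count):
        """Yield sum(: f(i).power(i, x) <-i |n :) for n up to count."""
        total, xpow = 0.0, 1.0  # power(0, x) is the multiplicative identity
        for i in range(count):
            total += f(i) * xpow
            yield total
            xpow *= x

    ones = lambda n: 1
    print(list(partial_sums(ones, 0.5, 8)))  # approaches 1/(1 - 0.5) = 2
    print(list(partial_sums(ones, 2.0, 8)))  # grows without bound: no sum
    print(list(partial_sums(ones, 0.0, 3)))  # at zero, every partial sum is f(0)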
However, before attending to the details of when and where the associated function is defined, let's first look at how the algebra on sequences – that would define such functions, in so far as it can – works. In particular, let's choose a different spanning set for the functions under discussion, one that makes some of the things we tend to do with power series easier to deal with.
Given a ringlet, mappings from its values to its values form a module over the ringlet; each of the natural powers constitutes a mapping in this module; the polynomials over that ringlet are the span of these natural powers. As span only deals in finite sums, this span does not include the functions that correspond to infinite power series (in so far as any do); however, in so far as power series do define such functions, these functions are necessarily limit points of that span, so the polynomials are dense in the space of functions defined by power series.
As the polynomials are defined to be the span of the natural powers, these provide a standard basis. (That basis might not be all of the powers: it's possible, in any given ringlet, that some power may be equal to a polynomial of lower order, in which case the earlier powers constitute a basis and all later powers can be expressed in terms of it. When our ringlet in fact has an ordered field as a sub-ringlet, however, this does not arise.) However, we can choose any basis (or, indeed, spanning sequence; it doesn't have to be linearly independent, that just eliminates ambiguity from the co-ordinates) we like for that span, or indeed for any span of which it's a sub-space; that will, again, give a reading of lists of coefficients (the co-ordinates with respect to the chosen basis) times corresponding basis members. As long as the new sequence, replacing power, does span at least the polynomials, the power-series functions shall remain limit points of the span.
One of the ways to change basis is simply to scale each member by a factor, which may vary with the index of the member; for example, one may scale each power down by the factorial of its index to use power/factorial = (: power(n)/n! ←n :) as basis; the function that the standard reading represents by a list f of coefficients gets represented, with this scaled basis, as f.factorial = (: f(n).n! ←n :); which, of course, changes the formal rules for combining lists of coefficients (without mentioning power) that express polynomial multiplication, taking the standard basis for granted. One representation has a list f represent sum(f.power), the other has it represent sum(f.b) for whatever spanning sequence b we may have chosen.
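Concretely, converting a coefficient list between the two readings is just a pointwise scaling; here's a minimal sketch (my own; the function names are invented for it):

    from math import factorial

    # Sketch: convert a coefficient list between the standard basis,
    # power, and the scaled basis b = power/factorial.
    def to_factorial_basis(f):
        """sum(f.power) == sum(g.b) where g = f.factorial."""
        return [c * factorial(n) for n, c in enumerate(f)]

    def to_power_basis(g):
        """sum(g.b) == sum(f.power) where f(n) = g(n)/n!."""
        return [c / factorial(n) for n, c in enumerate(g)]

    # The polynomial 1 + x + x**2/2 (standard coefficients) has b-based
    # co-ordinates [1, 1, 1]: it is a truncation of exp.
    print(to_factorial_basis([1, 1, 0.5]))  # [1, 1, 1.0]
    print(to_power_basis([1, 1, 1]))        # [1.0, 1.0, 0.5]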
With any basis ({polynomials}: b :{naturals}), we represent a polynomial by
a partial list ({coefficients}: h :n) for which sum(h.b) = sum(: h(i).b(i)
←i :) is the polynomial. By expressing each b(i) in the standard form and
using the arithmetic there, we obtain polynomials b(i).b(j), for various i, j,
described in the standard form; when we express these in our b-based
co-ordinates, by an h for which each b(i).b(j) = sum(h(i, j).b), the values of
the h(i, j) then determine how that multiplication is performed, using the
co-ordinates based on b. As for multiplication, so also for all the other
things we can do with polynomials; by transforming co-ordinates back and forth,
we can encode in any of these descriptions everything the other can express,
albeit the expression is appropriately transformed for the new encoding. One of
the things we can do with polynomials in standard form is differentiation; with
D as is the derivative of
, D(sum(f.power)) = sum(: f(1+n).(1+n).power(n)
←n :) and repeating this m times gives us repeat(m, D, f.power) = sum(:
f(n+m).power(n).(n+m)!/m! ←n :).
Let's put that in the co-ordinates of my previously-suggested b = power/factorial = (: power(n)/n! ←n :) and see if its use of the factorial can tame the mess of factorials in the co-ordinate expression of the derivative. We get D(sum(: f(n).power(n)/n! ←n :)) = sum(: f(n+1).(1+n).power(n)/(1+n)! = f(n+1).power(n)/n! ←n :); repeating that m times gives us sum(: f(n+m).power(n)/n! ←n :), so differentiation just shifts index in our new representation; repeat(m, D, sum(f.b)) = sum(: f(n+m).b(n) ←n :) just discards the first m entries in f. So this new basis makes differentiation simple.
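A quick sketch of this in code (my own illustration; the function names are invented), contrasting the derivative's action on coefficient lists in the two bases:

    # Sketch: the derivative acting on coefficient lists in each basis.
    def d_power(f):
        """Standard basis: D(sum(f.power)) has coefficients f(1+n).(1+n)."""
        return [(n + 1) * c for n, c in enumerate(f[1:])]

    def d_factorial(f):
        """Scaled basis b = power/factorial: D just discards f(0)."""
        return f[1:]

    # f = [0, 0, 0, 1] is power(3) in the standard basis ...
    print(d_power([0, 0, 0, 1]))      # [0, 0, 3]: 3.power(2), as expected
    # ... and [0, 0, 0, 6] in the b basis, as power(3) = 6.b(3); its
    # derivative, 6.b(2) = 3.power(2), is just the shifted list:
    print(d_factorial([0, 0, 0, 6]))  # [0, 0, 6]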
Now let's see what this representation does to multiplication and to binomials. Let's start with how those are expressed by the standard representation:

    sum(f.power).sum(g.power) = sum(: sum(: f(i).g(n−i) ←i |1+n :).power(n) ←n :)

    power(n, x+y) = sum(: F(i, n−i).power(i, x).power(n−i, y) ←i |1+n :)

in which F(i, j) = (i +j)!/i!/j!, giving some hope that the second will be simplified here, too. So let's look at that first:

    b(n, x+y) = power(n, x+y)/n! = sum(: (power(i, x)/i!).(power(n−i, y)/(n−i)!) ←i |1+n :) = sum(: b(i, x).b(n−i, y) ←i |1+n :)

which is just the standard form's equivalent without its F factors; so let's see how much our new representation costs us on multiplication:

    sum(f.b).sum(g.b) = sum(: sum(: F(i, n−i).f(i).g(n−i) ←i |1+n :).b(n) ←n :)
So the price of the new representation is that it moves F-factors from the binomial rule to the multiplication rule – it thus makes the more complex rule simpler while making the simpler rule marginally more complex, so the two rules are closer to each other in level of simplicity – but it behaves wonderfully well with derivatives, just shunting coefficients down one index without any scaling. That makes it particularly interesting, as a representation, when differentiation (or integration) is involved; it is also (thanks to the binomial rule) apt whenever we're interested in how our function's value at a sum of inputs relates to the function's values at each of those inputs.
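The two multiplication rules are easy to check numerically; the following sketch (again with names of my own devising) multiplies coefficient lists both ways and confirms they describe the same polynomial:

    from math import comb, factorial

    # Sketch: polynomial multiplication as an operation on coefficient lists.
    def mul_power(f, g):
        """Standard basis: plain convolution of coefficient lists."""
        h = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, c in enumerate(g):
                h[i + j] += a * c
        return h

    def mul_factorial(f, g):
        """Scaled basis b = power/factorial: convolution with F factors,
        F(i, j) = (i+j)!/i!/j! = comb(i+j, i)."""
        h = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, c in enumerate(g):
                h[i + j] += comb(i + j, i) * a * c
        return h

    f, g = [1, 2, 3], [4, 5]  # b-based co-ordinates
    fp = [c / factorial(n) for n, c in enumerate(f)]  # standard co-ordinates
    gp = [c / factorial(n) for n, c in enumerate(g)]
    h = mul_factorial(f, g)
    # The two products agree once converted to a common basis:
    print([c / factorial(n) for n, c in enumerate(h)] == mul_power(fp, gp))  # True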
Now, the basis members have ever-larger denominators, so we may think of
them as shrinking away; however, each has ever-larger power, pulling in the
opposite direction, at least for large enough
inputs; while power(n,
1)/n! gets smaller as n gets big, power(n, x)/n! necessarily grows huge for
large enough x (when x ≥ power(1/n, n!), more or less). As far as
differentiation is concerned, of course, they're in some sense equally large,
since it maps each basis member to the previous, save for the first, which it
maps to the zero polynomial, represented (in any co-ordinates) by the empty
list. In those terms you could consider power(0)/0! the smallest (it's mapped
to zero), in which case power(1)/1! is the next smallest, and each successive
power(n)/n! is bigger than all earlier ones. So there are no decisive grounds
for deciding the respective sizes of the basis members (in either
representation). Note that each power/factorial basis member is smaller than
the corresponding power basis member; and each is so by a steadily larger
factor.
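For a rough sense of where power(n, x)/n! turns the corner (a throwaway numeric check of my own): power(1/n, n!) grows without bound, roughly like n/e by Stirling's formula:

    from math import e, factorial

    # Sketch: power(n, x)/n! first exceeds 1 near x = power(1/n, n!),
    # which Stirling's approximation puts close to n/e for large n.
    for n in (5, 10, 20, 40):
        print(n, factorial(n) ** (1 / n), n / e)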
In consequence of this, sum(: power(n, x)/n! ←n :{naturals}) converges for every x (in reals and a great many other contexts) and the function exp which maps each x to this sum is its own derivative: its b-based co-ordinates are all 1, and differentiation merely discards the first, leaving the all-ones sequence unchanged.
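Here's a minimal check of the convergence claim (my own sketch; math.exp is used only for comparison):

    from math import exp, factorial

    # Sketch: partial sums of sum(: power(n, x)/n! <-n :{naturals}).
    def exp_partial(x, count):
        return sum(x ** n / factorial(n) for n in range(count))

    for x in (1.0, 5.0, -3.0):
        print(x, exp_partial(x, 30), exp(x))  # each pair agrees closely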
Factorial grows faster than any polynomial. Proof:
First consider any simple power, compared to factorial by pointwise division, (: power(m, n)/n! ←n :(|>:{2.m})), for some natural m. For each input n to this, there is some natural i > m with i+m ≤ n ≤ 2.i, for which the m factors i+1, i+2, …, i+m in the denominator (between the 1, 2, …, m, m+1, …, i and i+m+1, i+m+2, …, n factors making up n!) are all > n/2; let i be the least such and divide the numerator by power(m, n/2), to get

    power(m, n)/n! < power(m, n)/(i!.power(m, n/2).(i+m+1).(i+m+2).….n) = power(m, 2)/(i!.(i+m+1).(i+m+2).….n)
in which the final product is just n!/(i+m)! written out without division. Now, there are more than m factors in i!, only the first of which isn't ≥ 2; the first four give product 24 > power(4, 2) so the product of the first m of them is > power(m, 2) and 1 > power(m, 2)/m! when m ≥ 4. As i > m, for i > 4 (i.e. as long as both 8 and m are < n), power(m, 2)/i! < m!/i! and

    power(m, n)/n! < m!/(i!.(i+m+1).(i+m+2).….n) = 1/((m+1).(m+2).….i.(i+m+1).(i+m+2).….n)
whose denominator has i −m factors each ≥ m and n −i −m terms, each > i +m. So this denominator ≥ power(i −m, m).power(n −i −m, i+m). As i+m > m this is > power(n −2.m, m). Hence

    power(m, n)/n! < 1/power(n −2.m, m)
whence power(m, n)/n! decreases with n for n > 2.m, necessarily tending towards zero. Furthermore,

    sum(: power(m, n)/n! ←n :(|>:{2.m})) < sum(: 1/power(k, m) ←k :{positives}) = 1/(m −1)
at least for all large enough m. This may come in handy when studying convergence of power series. Still, back to the plain, unsummed, value of one power, divided by factorial: for each natural m, the sequence (: power(m, n)/n! ←n :) converges to zero. Clearly multiplying it by some fixed scaling isn't going to change that. A pointwise sum of finitely many sequences, each of which converges to zero, will likewise converge to zero; hence any polynomial (restricted to its action on natural inputs, possibly understood as their images in some ringlet) divided pointwise by factorial (and interpreted as a sequence) likewise converges to zero – i.e. factorial grows faster than any polynomial.
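To watch that happen (a final numeric sketch of my own):

    from math import factorial

    # Sketch: for fixed m, power(m, n)/n! collapses once n passes 2.m.
    m = 5
    for n in (5, 10, 11, 15, 20, 30):
        print(n, n ** m / factorial(n))  # shrinks rapidly beyond n = 10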
See also: how rearranging the terms in an infinite sum can change what they converge to.
Written by Eddy.