Suppose we have
The last may be construed as a rate of variation in the total energy inside the volume. The pressure may be assumed to be constant throughout the volume (albeit this involves supposing the density of the gas times any difference in gravitational potential within the volume is negligible compared to the pressure).
The analysis of such a system follows the standard form of statistical physics. It remains to introduce an analysis of the single-particle states of the system, each of which describes a single molecule of one component of the gas. It suffices to characterize a set of such states that form, via superposition, a basis of the possible states. Since energy is one of our observables, we can factorize our solutions of Schrödinger's equation into a simple exp(−i.E.t/ℏ) time-dependence combined with a time-independent spatial field, as long as the volume's size and shape are constant.
The fact that our gas is constrained to the volume amounts to the potential being a function that's zero inside the volume and infinite outside. Since it's constant in the interior, the spatial solution is simply a superposition of plane waves, each of constant amplitude.
First, consider the case where the volume of space is a cuboid, with mutually perpendicular edges ({vectors}: b |dim), in which dim is the number of space-like dimensions of space-time (which you're welcome to believe is three). Analysis of a single particle of gas, considered as a particle, imposes no constraint on its possible momenta or, within the box, positions. Analysis of it as a wave, however, requires it to form a standing wave within the box, to avoid destructively interfering with itself. Assuming reflection off each wall of the box with a node of the wave at the wall, we find that the particle's wave covector must map each edge vector, b(i) for i in dim, to a multiple of a half turn. (We get the same constraint if we require an anti-node at each wall.) The covectors satisfying this constraint form a lattice, {sum(n.q): ({integers}:n|dim)} for some ({covectors}:q|dim) for which q(i)·b(j) is zero unless i = j, in which case it is a half turn.
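As a concrete check of that duality condition, here is a minimal numeric sketch of my own (not from the text): for orthogonal edges b(i), the covector q(i) is parallel to b(i) and scaled so that q(i)·b(i) is a half turn. The units here take one turn as 1, so a half turn is 0.5, and the edge lengths are arbitrary assumed values.

```python
# My own illustration: construct the dual covectors q(i) satisfying
# q(i).b(j) = 0 for i != j and a half turn (0.5, with turn = 1) for i = j.
b = [(2.0, 0.0, 0.0), (0.0, 3.0, 0.0), (0.0, 0.0, 1.5)]  # assumed edges

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# For orthogonal edges, q(i) is parallel to b(i), scaled to give a half
# turn when contracted with b(i):
q = [tuple(0.5 * x / dot(e, e) for x in e) for e in b]
```

For non-orthogonal edges one would instead need the full dual basis, but the orthogonal case suffices for the cuboid considered here.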
Each such covector describes a particle going in one direction; since the particle must not leave the box, we must combine the solution this covector describes with its mirror images in the walls of the box to yield a solution which describes the particle bouncing around inside the box. Thus every actual solution combines power(dim, 2) of these covectors.
These feasible wave covectors imply feasible momenta {h.sum(n.q)/g: ({integers}:n|dim)}, where g is the metric of space-time, which turns each q(i) into a vector; since the b(i) were given to be orthogonal, each q(i)/g is in fact parallel to b(i). The length of each q(i)/g is just a half turn divided by the length of the matching b(i); their product is thus power(dim, turn/2) divided by the volume of our cuboid. Our lattice divides up the space of possible momenta into cuboid boxes, each with volume power(dim, h) times the product of lengths of the q(i)/g, so power(dim, h.turn/2) divided by the volume, V, of our cuboid. We have one feasible momentum per power(dim, 2) boxes of this size, hence a density of V/power(dim, h.turn) states per unit volume in momentum space.
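The counting above can be checked numerically. The following sketch is my own (with assumed units h = 1 and turn = 1, and arbitrary assumed edge lengths): it enumerates the standing-wave states of a cuboid box with momentum magnitude at most some cutoff P, and compares the tally with the claimed density V/power(dim, h.turn) times the volume of the momentum ball of radius P.

```python
import math

# My own numeric check, in units with h = 1 and turn = 1: count states of
# a particle in a cuboid box with momentum magnitude up to P, and compare
# with density V/h^dim times the volume of the momentum ball.
h = 1.0
b = (1.0, 1.3, 0.7)          # assumed edge lengths of the cuboid
V = b[0] * b[1] * b[2]
P = 50.0                     # momentum-magnitude cutoff

# Each state corresponds to one positive-integer triple n; its momentum
# components have magnitudes n(i).h/(2.length(b(i))), and the power(dim, 2)
# mirror-image covectors all belong to that same state.
count = 0
limits = [int(2 * L * P / h) + 1 for L in b]
for nx in range(1, limits[0] + 1):
    for ny in range(1, limits[1] + 1):
        for nz in range(1, limits[2] + 1):
            p2 = sum((n * h / (2 * L)) ** 2
                     for n, L in zip((nx, ny, nz), b))
            if p2 <= P * P:
                count += 1

# One state per 2^dim lattice boxes, spread over all 2^dim octants of
# momentum space, gives density V/h^dim over the whole of momentum space:
predicted = (V / h ** 3) * (4.0 / 3.0) * math.pi * P ** 3
ratio = count / predicted
```

The ratio approaches 1 as P grows; for the values above it agrees to within a few per cent, the discrepancy being a surface effect of the finite ball.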
The case of non-cuboid volumes is somewhat trickier to analyze; and, in any case, these pure momentum states only provide a rough guide to the actual gas, since its particles' collisions with one another (even when elastic) would preclude their actually being in such a pure momentum state. However, it seems reasonable to conjecture that the density of states remains V/power(dim, h.turn) in momentum space even in the more complex case, if only because this expression of the density in terms of V is independent of the lengths of individual sides of the box. We may also justify the conjecture by decomposing our more irregular volume into a large number of mostly cuboid volumes and summing the state densities they imply. In any case, let us take it that we have a basis of our possible states for which the number of basis members per unit volume of momentum space is uniformly V/power(dim, h.turn).
Up to this point, we haven't actually needed the fact that the gas is made of physical molecules or atoms; it could equally have been a gas of photons, for which energy is just the speed of light times the magnitude of the momentum. For the present, let's suppose we're dealing with a gas of particles, each of which has positive rest-mass m.
The kinetic energy of such a particle with momentum p is g(p,p)/m/2 and our potential is zero throughout our volume of space, so a basis state at location p in momentum space has eigenvalue E = g(p,p)/m/2 of the energy operator. (For present purposes, I'm taking the energy operator's arbitrary zero point to include the energy implicit in the mass of the gas, as part of the background.)
In the cuboid box, a basis member with momentum p = h.sum(n.q)/g exchanges momentum 2.h.n(i).q(i)/g with each wall perpendicular to b(i) each time it hits one; its speed perpendicular to these walls is h.n(i).length(q(i)/g)/m and it travels a distance equal to twice the length of b(i) between collisions with either of these walls, so takes a time 2.m.length(b(i))/length(q(i)/g)/n(i)/h between collisions with each wall; so it contributes a net force on each wall perpendicular to b(i) of magnitude

power(2, h.n(i).length(q(i)/g))/m/length(b(i))

which we'll be dividing by the area of the wall, V/length(b(i)), to get a pressure contribution

power(2, h.n(i).length(q(i)/g))/m/V.

Now, g(p, b(i)) = h.n(i).q(i)·b(i) = h.n(i).turn/2, so let u = (: b(i)/length(b(i)) ←i |dim) and we can re-write this pressure contribution as power(2, g(p, u(i)))/m/V; and note that summing this over i in dim simply yields g(p, p)/m/V or 2.E/V.
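The bouncing-particle arithmetic is simple enough to verify directly. This sketch is my own (with arbitrary assumed values for the mass, edge lengths and momentum components): it computes each wall's force as impulse per round trip, divides by the wall's area, and confirms that the contributions sum to 2.E/V.

```python
# My own numeric check of the pressure argument: in each direction the
# particle hands 2.|p_i| of momentum to a wall per hit and makes a round
# trip of length 2.L between hits; summing force/area over the dim
# orthogonal directions should give 2.E/V.
m = 2.0                             # assumed mass
b = (1.0, 1.3, 0.7)                 # assumed edge lengths
V = b[0] * b[1] * b[2]
p = (0.9, -0.4, 1.7)                # assumed momentum components along each u(i)

pressure = 0.0
for p_i, L in zip(p, b):
    impulse = 2 * abs(p_i)          # momentum handed to the wall per hit
    period = 2 * L / (abs(p_i) / m) # round-trip time between hits
    force = impulse / period        # = p_i**2 / (m * L)
    pressure += force / (V / L)     # the wall's area is V / L
E = sum(x * x for x in p) / m / 2   # kinetic energy g(p, p)/m/2
```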
The particles of our gas may have internal state (e.g. the atoms making up its molecules may wobble about in various ways); I take it that one of the internal states is a ground state, with least internal energy, whose rest-mass is what I've tacitly used as m; all other states have higher energy. For each of the momentum-space states discussed up to now, we get one state per internal state; the internal state doesn't contribute to momentum (or pressure) but does contribute its surplus energy to the total energy of the over-all state. The possible values of this surplus depend on the particular types of internal dynamics: for wobbling molecular bonds, a simple harmonic oscillator's evenly-spaced energy levels may be an adequate model, while excitation of the electrons (typically in hotter gases) would typically give answers proportional to differences of inverses of squares of integers.
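To make those two ladders of surplus energy concrete, here is my own toy illustration (the energy scales are arbitrary assumed values, not taken from the text): an evenly spaced harmonic ladder for bond vibration, and Rydberg-like electronic excitation energies proportional to differences of inverses of squares of integers.

```python
# My own illustration of the two kinds of surplus-energy ladder.
hbar_omega = 0.2   # vibrational quantum, arbitrary assumed units
vibrational = [n * hbar_omega for n in range(5)]          # evenly spaced

rydberg = 1.0      # electronic energy scale, arbitrary assumed units
# excitation from the n = 1 ground level: rydberg.(1/1^2 - 1/n^2)
electronic = [rydberg * (1 - 1 / n ** 2) for n in range(1, 6)]
```

The vibrational surpluses climb in equal steps, while the electronic ones bunch up towards the ionization limit, rydberg.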
Crucially, the presence of internal state implies more states at each given momentum; this is particularly relevant for fermions, as no two of them can have identical state, but differences in internal state will let two of them have the same momentum. Suppose, then, that we have a basis M of our internal states; each basis state of an individual particle is a product of a member of M and one of the momentum-states described above.
So now we know how a particle of momentum p contributes to energy and pressure, and we have a basis C of our single-particle states with density V/power(dim, h.turn) in momentum-space, per internal state in M. We can now apply the usual machinery of a system of many indistinguishable parts. For each c in C we obtain a number operator n(c) that's a formal observable giving the number of particles in state c; and all other observables can be obtained from (:n|C) and the values of the observables on the single-particle states. We have some specific set J of actually observed observables, with (:Val|J) giving the values observed, and we take as given that the chosen bases of states are eigenstates of these observables, hence each member of J is diagonal with respect to the whole-system states derived from C.
So let's start by supposing all we've observed, aside from the volume of the box, is the total energy, E, and the amount of gas in the box, N particles; so J = {N, E}. A typical distribution on J can then be represented by the scalars by which it multiplies a function's images of E and of N, prior to adding them to get the distribution's integral of the function. Let's ignore the distributions that ignore E and assume its multiplier, β, is non-zero; we can then (for convenience in what comes later) express the multiplier of N as β times some other factor, say −μ. So our typical distribution h on J is (: β.(f(E) −μ.f(N)) ←(:f|J) :). We evaluate this on J (construed as a mapping ({observables}: j←j |J), with {observables} a vector space) to get h(J) = β.(E −μ.N) and apply exp to get Z(h) = exp(h(J)) = exp(β.(E −μ.N)). (The observables E and N are linear maps from a Hilbert space to itself, hence amenable to exponentiation.) Taking the log of the trace of this and differentiating the result (with respect to h, as a function on distributions), we must then select a distribution for which the derivative agrees with Val, our observed values.
The general solution selects a distribution h on J for which, for each X in J,

Val(X) = sum(: F(X, c)/(exp(h(F, c)) ± 1) ←c |C)

where F(X, c) is the eigenvalue for observable X in state c, σ(c,X(c)) when σ is the (Hilbert-space) metric on our space of single-particle states, span(C). The ± is + if our particles are fermions or − if they're bosons. One of our members of J is N, for which F(N, c) = 1 for every single-particle state c; the other is E, with F(E, c) being the energy level of state c; so exp∘h(−F) = (: exp(β.(μ −F(E,c))) ←c |C) and we get:

Val(N) = sum(: 1/(exp(β.(F(E, c) −μ)) ± 1) ←c |C)
Val(E) = sum(: F(E, c)/(exp(β.(F(E, c) −μ)) ± 1) ←c |C)
We know Val(E), Val(N) and all F(E, c), so our two unknowns β and μ should be determined by these two equations.
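To see that the pair (β, μ) really is pinned down, here is a toy numerical solve of my own for the fermion case (the + sign). The level values and observed totals are arbitrary assumptions, and nested bisection is merely a convenience; any two-dimensional root finder would serve.

```python
import math

# My own toy solve, for fermions, of the pair of equations
#   Val(N) = sum 1/(exp(beta.(e - mu)) + 1)
#   Val(E) = sum e/(exp(beta.(e - mu)) + 1)
# over an assumed finite set of single-particle energy levels.
levels = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]   # assumed F(E, c) values
val_N, val_E = 3.0, 2.0                   # assumed observed values

def occ(beta, mu, e):
    x = beta * (e - mu)
    if x > 500:
        return 0.0                         # avoid math.exp overflow
    if x < -500:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def totals(beta, mu):
    ns = [occ(beta, mu, e) for e in levels]
    return sum(ns), sum(n * e for n, e in zip(ns, levels))

def mu_for(beta):
    # total particle count increases with mu, so bisect mu to hit val_N
    lo, hi = -50.0, 50.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if totals(beta, mid)[0] < val_N:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def energy_at(beta):
    return totals(beta, mu_for(beta))[1]

# At fixed particle count, total energy falls as beta grows (a colder
# gas), so bisect beta to hit val_E:
lo, hi = 1e-3, 60.0
for _ in range(100):
    mid = (lo + hi) / 2
    if energy_at(mid) > val_E:
        lo = mid
    else:
        hi = mid
beta = (lo + hi) / 2
mu = mu_for(beta)
N, E = totals(beta, mu)
```

For bosons one would use the − sign instead and keep μ below the least energy level, but the same shooting strategy applies.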
Given the values of (:Val|J), we can infer expected values for all other observables, (:K|{observables}); in particular, for the observable n(c) which counts how many particles are in any given state c, we get

K(n(c)) = 1/(exp(h(F, c)) ± 1)

with −h(F, c) = β.(μ −F(E, c)), so

K(n(c)) = 1/(exp(β.(F(E, c) −μ)) ± 1)

and we can infer any other observable's expected value as K(X) = sum(: K(n(c)).F(X, c) ←c |C).
Written by Eddy.