
# Resistor Lattice

In XKCD 356, the nerd sniper postulates an infinite square grid with one Ohm resistors along the edges and asks for the equivalent resistance between two vertices in the grid. This is equivalent to asking what voltage must be applied between the two nodes to cause a current of one Amp to flow into the grid at one and out through the other. This looks like an elegant problem to consider, so I wilfully chose to be sniped by it. My first stab at it (back in 2016) got rather bogged down in a failure to show potential goes to zero at large radius. In 2019 I made some progress with that.

There's a standard computational way to solve such problems, known as relaxation, where one starts with some simple arrangement of the potentials at the vertices of the lattice and then iteratively changes them towards being more nearly a solution. In this case, we'd have the source and sink vertices with their given potentials and initialize all the other vertices at some chosen initial value – I'd pick half way between the source and sink values. After that (for reasons I'll come to below), at each iteration, we set the potential at each vertex, aside from the source and sink, to the average of the previous iteration's potentials for its neighbours. It seems reasonable to expect this to settle down fairly rapidly to a sensible pattern.

Of course, computationally, one tends to do this with a finite grid and trust that what happens beyond its edges is sensible. Such calculation suggests the net resistance is around seven ninths that of each ideal resistor in the lattice. The illustration here shows (the central portion of) such a solution, showing potential as red at the source, varying via magenta to blue at zero and then via cyan to green at the sink; each resistor's shade of yellow indicates how much current it carries. Colour varies as a logarithm of the magnitude it indicates, when non-zero. (When I initially used linear scales, only a very small region near the source and sink showed any discernible variation.)
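The relaxation just described is easy to sketch in code. The following is a minimal illustration, not a transcript of my calculation: the patch size, the iteration count, the insulated treatment of the patch's outer edge and the knight's-move placement of the sink are all choices of the sketch.

```python
# Relaxation on a finite patch of the grid: fix the source at +1 and the
# sink at -1, then repeatedly replace every other vertex's value by the
# average of its neighbours' values from the previous sweep.
N = 25                                   # odd, so the patch has a centre
source = (N // 2, N // 2)
sink = (source[0] + 2, source[1] + 1)    # the knight's-move separation

def neighbours(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < N and 0 <= y + dy < N:
            yield x + dx, y + dy

V = [[0.0] * N for _ in range(N)]
V[source[0]][source[1]], V[sink[0]][sink[1]] = 1.0, -1.0

for _ in range(2000):
    W = [row[:] for row in V]            # Jacobi-style update
    for x in range(N):
        for y in range(N):
            if (x, y) not in (source, sink):
                nbrs = list(neighbours(x, y))
                W[x][y] = sum(V[a][b] for a, b in nbrs) / len(nbrs)
    V = W

# Each edge is one Ohm, so its current equals its potential drop; the net
# resistance is the applied potential difference over the current leaving
# the source.
amps = sum(V[source[0]][source[1]] - V[a][b] for a, b in neighbours(*source))
ohms = (V[source[0]][source[1]] - V[sink[0]][sink[1]]) / amps
print(ohms)
```

On such a modest patch the answer comes out a little above the infinite grid's known 4/π − 1/2 ≈ 0.7732 for the knight's move (truncating the grid removes conductance, which can only raise the resistance), consistent with the roughly seven-ninths figure above.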

### Co-ordinates

We have a lattice of vertices; into one we deliver one Amp of current, which flows out through another; let the former be called source and the latter sink. Each vertex in the lattice is a position: each difference between positions is a displacement; displacements can be added and scaled to get more displacements; or a displacement can be added to a position to get a position. We can characterise displacements between vertices by the number of steps each involves in each of the directions within our lattice. One displacement of particular interest is sink −source, the net displacement our current flows through, no matter how circuitous a path it takes to get from source to sink.

The length of a path within the lattice is just the number of edges it traverses; the path has minimal length, among paths with the same end-points, precisely if no two edges in it are in opposite directions. All minimal paths from a given start-point to a given end-point take the same number of steps in each direction; only the order of the directions of their steps varies. Thus all minimal paths from one given point to another have the same length, giving us a natural notion of distance between vertices in the lattice. (Technically, this is an L1 norm rather than the better-known Pythagorean or L2 norm.)
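In code, this distance is a one-liner; a tiny sketch (the example vertex pairs are mine):

```python
# Lattice distance is the L1 (taxicab) norm of the displacement: every
# minimal path between two vertices traverses exactly this many edges.
def lattice_distance(a, b):
    return sum(abs(p - q) for p, q in zip(a, b))

assert lattice_distance((0, 0), (2, 1)) == 3        # knight's move: 3 edges
assert lattice_distance((1, 2, 3), (1, 0, 4)) == 3  # works in any dimension
```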

Label as positive every direction that any step in a minimal path from source to sink takes, and label as negative any direction that is the opposite of a positive direction. For any direction in our lattice that this hasn't given a sign to, pick one direction as positive and the opposite as negative. For each direction, we can ask how many positive steps in that direction appear in a minimal path from source to sink: the answer is the same for all minimal paths. (I here tacitly assume a finite-dimensional grid; the infinite-dimensional case may be a bit trickier.) Order the directions of edges within our lattice by how many steps in each direction a minimal path from source to sink takes: if some directions have equal numbers of steps in such a minimal path, choose the ordering of these directions any way you like. We thus get an ordering of our directions which lets us label each direction by a member of dim = {0, 1, …, dim−1} and describe any displacement by the list that maps each i in dim to the number of steps a path, between end-points separated by that displacement, takes in direction i minus the number of steps that path takes in the opposite direction. This lets us express every displacement as a list ({integers}: |dim); and our choice of directions and their ordering ensures that the list that represents sink −source is monotonically non-decreasing and has no negative entries. The length of a displacement is then the sum of absolute values of entries in the list representing it; as sink −source has no negative entries, its length is simply its sum. Hereafter, I shall treat every displacement as synonymous with the list that represents it.

The zero-dimensional case of our problem has only one node, so has source = sink. In any dimension, if sink is source, our problem is trivial – we have a net resistance of zero. The one-dimensional case, even with distinct source and sink, is almost as simple: there is only one current path, along the line, so the current is constant between source and sink and zero outside that interval. Outside the interval the potential is thus that of the source on its side of the interval and that of the sink on the other side. In between, the potential varies linearly. The resistance is simply the distance from source to sink times the one Ohm resistance of the ideal resistor on each edge. Hereafter I shall assume dim is at least two and source ≠ sink – although sink −source may have some zero entries.

We can label the edges of our lattice by a mapping (({edges}: |{vertices}): edge |dim) for which edge(n, v) is the edge in co-ordinate direction n with v at its positive end; so edge(0, source) connects source −[1, 0, …, 0] to source, for example. In two dimensions, the four edges out of a vertex v are edge(0, v), edge(0, v +[1,0]), edge(1, v) and edge(1, v +[0, 1]). Let ({displacements}: unit |dim) map each co-ordinate index to the unit positive displacement in that co-ordinate's direction; so unit(n) maps n to 1 and all other members of dim to 0. Every edge connects two vertices that differ by unit(n) for some n: edge(n, v) connects v−unit(n) to v.
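A sketch of this labelling in code, in two dimensions for concreteness; the helper names mirror the text's unit and edge, with an edge represented simply as the pair of vertices it connects:

```python
# unit(n) is the positive unit displacement in direction n; edge(n, v) is
# the edge joining v - unit(n) to v.
def unit(n, dim):
    return tuple(1 if i == n else 0 for i in range(dim))

def edge(n, v):
    tail = tuple(a - b for a, b in zip(v, unit(n, len(v))))
    return (tail, v)   # the pair of vertices the edge connects

# The four edges out of a vertex v in two dimensions, as in the text:
v = (3, 5)
out = [edge(0, v), edge(0, (4, 5)), edge(1, v), edge(1, (3, 6))]
assert edge(0, v) == ((2, 5), (3, 5))   # joins (2, 5) to (3, 5)
assert all(v in pair for pair in out)   # every listed edge touches v
```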

### Basic equations

Aside from source and sink, the total current into or out of any vertex is zero; at source we have one Amp coming in from elsewhere and leaving via the 2.dim edges out of the vertex; at sink, we have 2.dim edges delivering a total of one Amp which is leaving to elsewhere. Let (({reals}: |{vertices}): I |dim) encode the currents such that edge(n, v) has current I(n, v)/dim/2 Amps flowing from v−unit(n) to v; I(n, v) is negative if the current flows the other way. Let ({reals}: S |{vertices}) be the source-or-sink current, scaled by 2.dim/Amp, at each vertex, with S(source) = +2.dim, S(sink) = −2.dim and S(v) = 0 at each other vertex v. Then, for every vertex v, we have

• sum(: I(n, v) ← n |dim) +S(v) = sum(: I(n, v+unit(n)) ← n |dim)

as our statement of the conservation of current at each vertex, adjusted at the two end-points by S (total current in, on the left, equal to total current out, on the right, scaled by 2.dim/Amp).

Let ({reals}: V |{vertices}) map each vertex to its electrostatic potential scaled by 2.dim/Volt; since each edge has resistance one Ohm, the current along it is just equal (by Ohm's law) to the potential difference across it, flowing from the higher-potential end to the lower, so I(n, v) = V(v−unit(n)) −V(v) and the current equation above becomes:

• sum(: V(v−unit(n)) −V(v) ← n |dim) +S(v) = sum(: V(v) −V(v+unit(n)) ← n |dim), or
• S(v) = sum(: 2.V(v) −V(v+unit(n)) −V(v−unit(n)) ← n |dim)

which, except at our end-points, says that V's value at each vertex is the average of its values at the neighbours of that vertex. (This is why the relaxation I mentioned in the introduction updates each vertex to the average of its neighbours.) At the end-points, V(v) differs from the average at v's neighbours by S(v)/dim/2 (which is why my scalings included a factor of 2.dim, making these values ±1 at source and sink, 0 everywhere else). Our aim is to discover the value of V(source) −V(sink); and we can reasonably expect this to depend only on sink −source and dim.

## A convenient symmetry

Before we dive into details, notice that a central inversion through the mid-point between source and sink, ({vertices}: source −v +sink ←v :{vertices}), interchanges source and sink while mapping every vertex to a vertex, hence every edge to an edge; so it preserves our lattice. If we combine this with replacing V by V(source) +V(sink) −V, so that the interchanged source and sink each get the other's value for V, we must actually get a solution to the same equations; this combination of transforming the vertices and V is a symmetry of the system. In particular, if we take any solution along with its image under this transformation, we can average the two; since they agree at source and sink, we won't change the values there. Since the equations are linear the average of two solutions is necessarily a solution; and, by construction, symmetric under our transformation. Thus, if there exist solutions, there exist solutions symmetric under this transformation.

Equally, if we take the difference between two solutions (notably including that between a solution and its image under this symmetry), we get a solution to the S ≡ 0 case of the system, with no source or sink and every node's potential equal to the average of its neighbours' potentials. While there exist non-constant solutions to this (e.g. where a constant electric field drives a steady current through the grid), they have infinite total power consumption so are unphysical. Indeed, as the system has no source of energy, its total power consumption must be zero; this is the resistance of each edge times the sum of the squares of the currents in all the edges. That sum must be zero, which is only possible if every edge carries zero current. So there is a unique physically plausible solution in the S ≡ 0 case. This in turn implies that, for any given choice of S, i.e. source and sink, there is a unique solution; and, since its image under our central-invert and reverse potential symmetry is also a solution, it must in fact be symmetric under this transformation.

The system provides at least one path from source to sink of length at most sum(sink −source); the resistance along this path is finite. Every other path from source to sink combines with this path, or some portion of it, in parallel; that necessarily produces a net resistance no greater than either of the parallel paths combined, so we can be sure that the over-all resistance is finite. Finite resistance and finite potential difference between source and sink implies finite total current from source to sink; multiply that by finite potential difference between source and sink and we can infer that the power delivered to our lattice of resistors is finite; hence that the sum of squares of currents in the edges of our lattice is finite. At large distances from source and sink, this implies that the average, over edges with at least one end at a given distance d from the nearer of source and sink, of the squared current in an edge must drop off faster than 1/power(dim −1, d), since the number of edges averaged over grows as power(dim −1, d); for any positive k, for all but finitely many d, the edges with an end at distance d must contribute less than k to the sum of squared currents. We may thus infer that potential tends towards constant along the overwhelming majority of trajectories that pass away from source and sink.
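The growth rate invoked here is easy to check numerically; the following counts, for small dimensions, the vertices on the shell at L1 distance d from a given vertex (the closed-form values in the comments are mine):

```python
# The number of vertices at L1 distance d from a given vertex grows like
# power(dim - 1, d), i.e. as d**(dim - 1); this is what lets a finite sum
# of squared currents force the average squared current on distant shells
# towards zero.
from itertools import product

def shell_size(d, dim):
    span = range(-d, d + 1)
    return sum(1 for p in product(span, repeat=dim)
               if sum(map(abs, p)) == d)

assert [shell_size(d, 2) for d in (1, 2, 3, 4)] == [4, 8, 12, 16]  # 4.d
assert [shell_size(d, 3) for d in (1, 2, 3)] == [6, 18, 38]        # 4.d.d +2
```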

### The zero-zone

In a symmetric solution, if we take (V(source) +V(sink))/2 as our zero-point for electric potential, the symmetric image of every point at a positive potential is a point at negative potential. Aside from the source and sink, each vertex's potential is the average of its neighbours' potentials, so any vertex surrounded by ones at positive potential is necessarily at positive potential; and likewise for negative. Thus a region of positive potential can't be surrounded by one of negative potential, or vice versa, unless source or sink is in the surrounded region. Every vertex v whose potential differs from that of any neighbour necessarily has some neighbours at higher potential and some at lower potential, so that their average matches the potential at v; every such vertex thus has at least one edge along which current arrives and another along which it leaves. No path through the network that goes always in a direction of positive current can form a loop (the sum of voltage drops along a loop's edges is zero and a sum of positives cannot be zero). Some such paths may end at infinity, but plenty of paths start at source, always follow the direction of positive current and end at sink. Along every such path, the potential decreases along every edge; it starts positive and ends negative, so must at some point pass through zero (possibly part way along an edge).

Therefore there exist edges that have non-negative potential at one end and non-positive potential at the other; let Z be the set of such edges. Z is necessarily symmetric under the central inversion through (source +sink)/2.

## Bounding at the boundary

Now consider a collection U of vertices; describe as its neighbours those vertices that are not in U but do have an edge in common with some vertex in U. We can take the maximum and minimum values of V on U and on its neighbours. For the purposes of this section, describe U as tidily bounded precisely if max(: V :{neighbours of U}) ≥ max(: V :U) and min(: V :U) ≥ min(: V :{neighbours of U}). This certainly holds true when U is {v} for some vertex v that isn't an end-point: in this case, the max and min of (: V :U) are both V(v), which is the average of V at the neighbours of v, hence in particular lies between the maximum and minimum of V's values at these neighbours. When we have a tidily bounded U, consider any neighbour v of U for which V(v) is the average of V on the neighbours of v, i.e. v is neither source nor sink. If V(v) is neither the maximum nor the minimum of V on neighbours of U then V attains its maximum and minimum, among such neighbours, at some other neighbours of U, which are necessarily neighbours of U∪{v}, which is thus also tidily bounded. If V(v) is both V's maximum and V's minimum on neighbours of U, then V is constant, with value V(v), on U and its neighbours; if all v's neighbours have this value for V, then U∪{v} is tidily bounded; otherwise, the neighbours of v have values of V whose average is V(v) and leaving out those whose value is this average won't change the average, so the neighbours of v that aren't in U or neighbours of U still have average V(v) and, as some of their values are not V(v), at least one is greater and at least one is less, so U∪{v} is tidily bounded.

If V(v) is the maximum (and not also the minimum) of V on the neighbours of U, then V(u) ≤ V(v) for all u in U, in particular for all neighbours of v that lie in U. If V's value on each neighbour of v is equal to V(v) then, as long as v has some neighbour not in U, U∪{v} is tidily bounded. Otherwise, v has some neighbour at which V is not equal to V(v); and V(v) is V's average over v's neighbours, so there is some neighbour w of v for which V(w) > V(v); by hypothesis, this w is not in U (since V(v) ≥ V(u) for all u in U) so U∪{v} has a neighbour on which V takes a value greater than V(v), hence than any value of V on U∪{v}. As long as V(v) wasn't also V's minimum on the neighbours of U, V attains that minimum at some neighbour of U other than v; and this is also a neighbour of U∪{v} since it is adjacent to some member of U (hence to some member of U∪{v}) and is neither in U nor equal to v. Thus U∪{v} is tidily bounded – as long as v has some neighbour not in U or some neighbour at which V is not V(v); and an exactly analogous argument applies, with the same caveat, when V(v) is the lowest value V takes on U's neighbours.

So, given a tidily bounded set of vertices, such as any single {v} with v neither the source nor the sink, we can always add neighbours (other than source and sink) of the tidily-bounded set to grow it while keeping it tidily bounded, as long as the neighbour we're adding has some neighbours outside the set; and this constraint is easy to satisfy, for example by going round the perimeter of the set so as to keep it roughly spherical (of the relevant dimension) at all times. We can't add the source or sink to the set, but we can surround them; they would then be neighbour points of the tidily-bounded set. If we take the union of two tidily bounded sets of vertices that are disjoint and neither of which has any neighbours in the other, the union's neighbours are just the union of the neighbours of the two sets and hence the union is also tidily bounded.

The upshot of all of this is that, on any set of vertices, excluding the source and sink, the range of values of V is bounded by values V takes at neighbours of the given set. In particular, considering the collection of all vertices other than the source and sink, we can reasonably expect V to take its maximum at the source and minimum at the sink; however, strictly, we can only build finite sets of vertices, so we can only reach this conclusion if we can impose some bound on V's behaviour far from source and sink. Reasonable physical intuitions suggest such bounds are possible, but first I pause to look at the solutions which violate those intuitions.

## Zero-point solutions

Pause to consider the system without any source or sink, leaving S out of the equations above: so V at each vertex is the average of V at that vertex's neighbours. We can add any solution to this to V without changing each node's difference from average, so the solution to the above is indeterminate modulo such a solution without source and sink. The obvious solution to this free system is that no current flows anywhere and all vertices are at a common potential. However, any covector on our co-ordinate space, i.e. any linear map ({reals}: f |{lists ({reals}: |dim)}), gives us a solution for V – since, for any vertex v and edge vector u, f(v−u) +f(v+u) is, by linearity, just f(v) −f(u) +f(v) +f(u) = 2.f(v), so each dimension's term in the equation of state above is zero. Such solutions correspond to there being a global constant electric field across the whole infinite array, driving a steady current flow through our grid, with each co-ordinate direction carrying a fixed proportion of that total current.
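A quick numeric check of this, in two dimensions; the particular coefficients are an arbitrary choice:

```python
# A linear map gives a free-system solution: at every vertex, its value
# is exactly the average of its values at the four neighbours.
def f(x, y):
    return 2 * x - 3 * y   # any linear map works

for x in range(-4, 5):
    for y in range(-4, 5):
        average = (f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1)) / 4
        assert average == f(x, y)
```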

Now consider the composite F of such a linear map f with (: x.x ←x :), squaring its output; again, for any vertex v and edge vector u,

F(v−u) +F(v+u) −2.F(v)
= f(v).f(v) −2.f(v).f(u) +f(u).f(u) +f(v).f(v) +2.f(v).f(u) +f(u).f(u) − 2.f(v).f(v)
= 2.f(u).f(u) = 2.F(u)

which is constant; summing over edge vectors u, every vertex's value of F differs from F's average at that vertex's neighbours by the same amount. If we now subtract a similar square of another linear map with an equal and opposite difference, we'll get a solution to our equations; and any sum of such terms shall be a solution. A sum or difference of squares of linear maps is just a quadratic form; any of which can be written as g∘h for some linear map h from our co-ordinate space to itself, with g being the standard metric induced by the basis (: unit |dim) of our edge directions, g(x, y) = sum(: x.y |dim). The quadratic form's value at vertex v is then H(v) = g(h(v), v) and the constraint we need to make it a solution of our equation of state is just that trace(h) is zero, since this is exactly what makes sum(: g(h(unit(n)), unit(n)) ← n |dim) = 0 and thus ensures, for each vertex v,

sum(: H(v+unit(n)) +H(v−unit(n)) −2.H(v) ←n |dim)
= sum(: g(h(v+unit(n)), v+unit(n)) +g(h(v−unit(n)), v−unit(n)) −2.g(h(v), v) ←n |dim)
= sum(: g(h(v) +h(unit(n)), v +unit(n)) +g(h(v) −h(unit(n)), v −unit(n)) −2.g(h(v), v) ←n |dim)
= sum(: g(h(v), v) +g(h(unit(n)), v) +g(h(v), unit(n)) +g(h(unit(n)), unit(n)) +g(h(v), v) −g(h(unit(n)), v) −g(h(v), unit(n)) +g(h(unit(n)), unit(n)) −2.g(h(v), v) ←n |dim)
= 2.sum(: g(h(unit(n)), unit(n)) ←n |dim)
= 2.sum(: H(unit(n)) ←n |dim) = 0

So every trace-zero linear map generates a quadratic form on our co-ordinate space that's a solution. Now, a quadratic form H is insensitive to the antisymmetric part of the g∘h that generates it; if g∘h is antisymmetric, H is zero. We thus have distinct solutions only for the g-symmetric trace-zero linear maps h, which form a space of dimension dim.(dim +1)/2 − 1. Together with our linear solutions and an arbitrary constant offset, we now have dim.(dim +3)/2 degrees of freedom in solutions of the free equation.

Just as we can form a quadratic form from a second rank covariant tensor, ignoring its antisymmetric part, and get a solution provided the g-trace is zero, we can likewise use higher rank fully covariant tensors to form higher powers of our vertex, contracted with each rank of the tensor. In each case, evaluation at v+u and v−u shall yield terms which cancel when u fills an odd number of the slots but reinforce when it fills an even number of slots; when we then average over all neighbours, we effectively contract our tensor in each of its possible ways with sum(: bulk(×, ({unit(n)}: |2.i)) ←n |dim) for some i > 0 with 2.i ≤ dim, contracting the remaining tensor factors (if any) of our form with v. Thus the difference from average at each vertex is a function of position that's a sum of terms, each of which is a power of position from a tensor of rank lower by some positive even amount. For this difference to be zero at all positions, each of these powers must be zero, so the reduced tensor of each rank must be zero.

In one dimension, we only have the linear solutions since the quadratic form is a simple scalar times the square of the one co-ordinate and that scalar is its g-trace, which must be zero, so we only get the zero quadratic form. That suggests higher dimensions might have some limitation on the rank of tensor (power of form) that can work, based on dimension; but I'm unable to find such a limit. Let ({covectors}: n |dim) be the basis dual to unit; then quadratic forms such as n(0)×n(0) −n(1)×n(1) are possible (this looks like the function x.x −y.y). A cubic must have its sum of g-traces equal to the zero co-vector; both n(0)×n(1)×n(1) +n(1)×n(0)×n(1) +n(1)×n(1)×n(0) −n(0)×n(0)×n(0) and the equivalent with 0 and 1 swapped satisfy this (these look like the function 3.x.y.y −x.x.x and the equivalent with x and y swapped). A quartic's [0,0,0,0] and [1,1,1,1] coefficients must be equal and opposite, while the quadratic form obtained by contracting every which way with g's inverse, then summing the variations, must also be zero. The form corresponding to x.y.y.y −x.x.x.y works, though; I'm inclined to suspect similar forms exist of higher ranks.
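The three two-dimensional forms just named can all be checked against the discrete mean-value property; this sketch does so on a small patch of the lattice:

```python
# Numeric check of the discrete mean-value property for the trace-zero
# quadratic x.x - y.y, the cubic 3.x.y.y - x.x.x and the quartic
# x.y.y.y - x.x.x.y, on a patch of the two-dimensional lattice.
forms = (
    lambda x, y: x * x - y * y,
    lambda x, y: 3 * x * y * y - x ** 3,
    lambda x, y: x * y ** 3 - x ** 3 * y,
)
for F in forms:
    for x in range(-5, 6):
        for y in range(-5, 6):
            average = (F(x + 1, y) + F(x - 1, y)
                       + F(x, y + 1) + F(x, y - 1)) / 4
            assert average == F(x, y)
```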

However, all of these solutions, including the linear and quadratic forms, have V growing without bound at arbitrary distance from the source and sink. Although this can represent a real physical situation, as in the case of the linear solution representing a constant background electric field, all of them dissipate power at an infinite rate, so are unphysical.

In the one-dimensional case, at large distance from source and sink, V could increase without bound without total power consumption (proportional to the sum of squares of changes in V between adjacent nodes) being infinite, but it would have to do so less rapidly than linearly and the result wouldn't be a solution. In higher dimension, if V grows without bound the total power consumption is infinite. One might hope to be able to demand that V tends to some background value far from source and sink; however, the one-dimensional case's bounded solution has V constant at negative co-ordinate value, increasing linearly between source and sink, then constant again beyond the sink, with the two constants different; so there's no one value to which V tends at large distance in both directions.

So the way to eliminate the complications of the zero-point solutions is to assert the physically reasonable constraint that V must be bounded above and below by some constants.

## Refined bounds

Now, given some H and L for which H ≥ V(v) ≥ L for every vertex v, we should be able to establish that, in fact, V(source) and V(sink) will do as H and L, respectively.

First, let's get the one-dimensional case out of the way. In this case, source = [0] and sink = [s] for some natural s. There is a solution with V([x]) = s for x ≤ 0, V([x]) = −s for x ≥ s and V([x]) = s −2.x for 0 ≤ x ≤ s; you can add an arbitrary offset to this if you want. Any other solution differs from this by a solution U to the zero-point problem above; and each integer x has U([x]) −U([x−1]) = U([x+1]) −U([x]) whence, by induction, U([z+1]) −U([z]) is the same for all naturals z and U is linear plus a constant; since U is necessarily bounded, this implies it is in fact constant and the only other solutions are, as indicated, just the solution above with an arbitrary constant added. The maximum and minimum values of V are indeed V(source) and V(sink). We get a potential difference between source and sink of (V(source) −V(sink))/dim/2 = s Volts (I've only shown this for s > 0; but we saw, earlier, that this is also true for s = 0) and a total resistance of s Ohms between our two end-points (as was perfectly obvious from the statement of the problem). Hereafter, take dim > 1 as given.
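The one-dimensional solution is easy to verify mechanically. This sketch uses the convention of the basic equations above, under which S(v) is 2.dim times the amount by which V(v) exceeds the average of V at v's neighbours, so S should come out as +2 at the source, −2 at the sink and 0 everywhere else; the choice s = 5 is arbitrary:

```python
# Check the one-dimensional solution: source = [0], sink = [s], with V
# equal to s below 0, s - 2.x in between, and -s above s.
s = 5

def V(x):
    if x <= 0:
        return s
    if x >= s:
        return -s
    return s - 2 * x

def S(x):   # 2.dim times (V's value minus its neighbours' average)
    return 2 * V(x) - V(x + 1) - V(x - 1)

assert S(0) == 2 and S(s) == -2
assert all(S(x) == 0 for x in range(-10, 11) if x not in (0, s))
```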

In dimension higher than 1, current can flow out of source, even in directions away from sink, and find paths round the side of source and thus back to sink; likewise, current can flow round the back of sink to come in from the side opposite to source. These processes shall ensure that V is in fact strictly between V(source) and V(sink) everywhere except at source and sink. It remains to prove this. Observe, first, that V(source) > the average of V at source's neighbours; therefore at least some vertex v has V(v) < V(source); likewise, some vertex v has V(v) > V(sink). Meanwhile, remember that our system has a natural symmetry that interchanges source and sink; which makes it worth analysing in terms that preserve this symmetry. In particular, whatever solution we might consider, averaged with its image under this transformation, gives us a solution symmetric under it; so we can limit attention to solutions symmetric under this interchange.

Define ({sets of vertices}: K |{naturals}) as follows: let K(0) = { source, sink }; for each natural n, let K(1+n) be the union of K(n) and its collection of neighbours. K(0) is finite; for any n with K(n) finite, K(n) has at most finitely many neighbours; uniting two finite sets yields a finite set; so K(n+1) is finite whenever K(n) is; hence, by induction, every K(n) is in fact finite. With the co-ordinate choice above, we have ({0}: source |dim) and ({naturals}: sink |dim) with dim finite; so the L1 distance (i.e. along edges, rather than Pythagorean L2 distance) between source and sink is just sum(sink), which is some natural; for any natural n for which 2.n ≥ sum(sink), K(n) includes all the vertices on paths from source to sink of minimal length; in particular, K(n) is connected (i.e. there's a path from any vertex in it to any other; by specification, every node in K(n) is connected, by a path of length at most n within K(n), to either source or sink; and a path from source to sink suffices to complete paths between nodes reached from one to those reached from the other). Thus, for all sufficiently large n, K(n) is connected.
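Growing K(n) this way is easy to mimic in code; this sketch, in two dimensions with the knight's-move pair as source and sink, reports the (finite) sizes of the first few K(n):

```python
# Grow K(n) from {source, sink} by repeatedly adjoining neighbours; each
# K(n) is finite, as the induction in the text argues.
source, sink = (0, 0), (2, 1)
steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
K = {source, sink}
sizes = [len(K)]
for _ in range(4):
    K = K | {(x + dx, y + dy) for (x, y) in K for dx, dy in steps}
    sizes.append(len(K))
print(sizes)   # starts [2, 10, 22, 38, ...]
```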

By restricting attention to solutions symmetric under our interchange of source and sink we can constrain V on the boundary; for every high value there must be (via the transformation) a low one diametrically opposite round the boundary, and vice versa. In the case of K(n) for 2.n < sum(sink), the relevant diametrically opposite interchanges the portion of boundary around source with the one around sink.

For n small enough that 2.n < sum(sink), there is no path in K(n) from source to sink and K(n) consists of two separate connected components, one centred on each of source and sink. Looking at these components with source and sink excluded, V on each component is bounded by V's values at neighbours of that component. We already know source has a neighbour at which V is less than V(source); so source is never the neighbour of its half of K(n), with source and sink removed, at which V attains its lowest bound among neighbours; likewise for sink; so I can skip mention of eliding source and sink from their respective halves when discussing V's minimum on source's half or maximum on sink's. Now, source's half of K(1) is source and its neighbours, at least one of which has V below V(source); so (when sum(sink) > 2) the lowest value of V on source's half of K(1) is less than V(source), hence so is V's lowest value on K(1) with source excluded; V attains this (fatuously, since all but source and sink of K(1) is boundary) on its boundary. Suppose, now, that V attains its lowest bound on source's half of K(1)

For each natural k, let M(k) be the collection of vertices in K(k) on which V attains its maximum. K(k) includes source's neighbours, at least one of which has V < V(source) and hence, in particular, less than the maximum; so M(k) is not all of K(k). Thus M(k) has some member m that has a neighbour in K(k) that is not in M(k). If any such m has all its neighbours in K(k), so that none of them has V > V(m), the fact that one of its neighbours has V < V(m) implies that V(m) > the average of V at m's neighbours, so m is in fact source. Thus the only possible members of M(k) are source and vertices on K(k)'s boundary. In particular, if V attains its global maximum (or minimum), the same argument implies it does so only at source (or sink), since we are left with no boundary members as alternative places at which to attain it.

The remaining possibility is that M(k)'s members lie on the boundary of K(k); indeed, since some interior points of K(k) are not in M(k), M(k) is then a subset of K(k)'s boundary. So, regardless of whether it's greater than V(source), for each k, let W(k) be V's maximum on the boundary of K(k) and suppose there's some k for which W(k+1) > W(k). Where V attains its maximum on K(k+1), we can classify neighbours as external if they're not in K(k+1), bounding if on its boundary and internal if otherwise in K(k+1); the external neighbours are all in the boundary of K(k+2) and the internal ones in that of K(k), so V is at most W(k) on the internal ones and W(k+2) on the external ones. The average over the internal neighbours is at most W(k) and that over the bounding neighbours is at most W(k+1), so the external neighbours, whose average is at most W(k+2), have to bring the overall average up to W(k+1). When there are as many internal as external neighbours, this implies W(k+2) −W(k+1) ≥ W(k+1) −W(k); this, repeated for each successive k, would lead to W(k) growing without bound, contrary to V being bounded. However, there may well be more external than internal neighbours. A cubic corner, with no internal neighbours, can't arise given how we constructed K; but we can get an octagonal (albeit with power(dim, 2) in place of the 8 implicit in that oct- prefix) corner with one internal neighbour and no bounding neighbours, hence 2.dim−1 external neighbours. Successive changes in W decreasing by a factor of 2.dim−1 would add up to a total change bounded above by 1/(1−1/(2.dim−1)) = (2.dim−1)/(2.dim−2) = 1 +1/(dim−1)/2 times the first step. That this can't really work arises from the corner's neighbours not being able to pull the same trick as the corner.

## Try again

We can at least set an upper bound on the resistance: the knight's-move displacement [2, 1] of the XKCD problem gives a path of three edges from source to sink, along which the resistance is 3.R, where R is the one Ohm resistance of each edge; all other paths act in parallel with this (or with parts of it), which can only reduce the resistance. So the resistance is at most 3.R.

We can also approximate our infinite grid by various finite grids; these, again, give upper bounds on the actual resistance. The central two squares give a diagram in which we get current 2.I going round two sides of each square, 3.I along the remaining edge out of source (and into sink) and the difference, I, along the middle edge shared by the two squares; along each path from source to sink, the potential change is 7.I.R and the total current is 5.I, so the resistance is 7.R/5. The four-by-three grid of squares with those two squares as its centre gives 959.R/1023 as a more refined upper bound. As we add successive layers of squares outwards, we should get successively refined upper bounds.
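The 7.R/5 value for the two-square network can be confirmed by exact nodal analysis; this sketch builds the network's Laplacian, injects one Amp at the source, grounds the sink and solves (the naive elimination is just a convenience of the sketch, nothing special to the problem):

```python
# Exact nodal analysis of the two-square (domino) network: six vertices,
# seven one-Ohm edges, one Amp in at the source, out at the sink.
nodes = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
edges = [((0, 0), (1, 0)), ((1, 0), (2, 0)),
         ((0, 1), (1, 1)), ((1, 1), (2, 1)),
         ((0, 0), (0, 1)), ((1, 0), (1, 1)), ((2, 0), (2, 1))]
index = {v: i for i, v in enumerate(nodes)}
n = len(nodes)
L = [[0.0] * n for _ in range(n)]
for a, b in edges:              # graph Laplacian = nodal conductances
    i, j = index[a], index[b]
    L[i][i] += 1.0
    L[j][j] += 1.0
    L[i][j] -= 1.0
    L[j][i] -= 1.0

source, sink = index[(0, 0)], index[(2, 1)]
rhs = [0.0] * n
rhs[source], rhs[sink] = 1.0, -1.0

# Ground the sink (delete its row and column) to make L invertible, then
# solve by naive Gauss-Jordan elimination with partial pivoting.
keep = [i for i in range(n) if i != sink]
A = [[L[i][j] for j in keep] + [rhs[i]] for i in keep]
m = len(keep)
for c in range(m):
    p = max(range(c, m), key=lambda r: abs(A[r][c]))
    A[c], A[p] = A[p], A[c]
    for r in range(m):
        if r != c and A[r][c]:
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
V = {keep[i]: A[i][m] / A[i][i] for i in range(m)}
resistance = V[source] - 0.0    # sink is grounded at 0 Volts
print(round(resistance, 6))    # 1.4, i.e. 7.R/5
```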

Written by Eddy.