I’m also interested in the memory characteristics of these two algorithms. What’s the asymptotic memory performance of HashLife/QuickLife?

If many / most / all Life starting arrangements have a similar 2-phase evolution, then Gosper will win once we get far enough out into the second phase to make up for Gosper’s gross underperformance in the early phase. But if the phase change point is at a few million iterations, then the point where Gosper surpasses QuickLife will be way, way out there. Perhaps farther than we can stand to wait for.

And what of patterns that don’t have this biphasic behavior? Do such patterns exist that remain chaotic forever? Or that oscillate between regular and chaotic phases? Of all possible starting patterns in, say, an 8-quad (32K × 32K cells), how many of the “interesting” ones are “Gosper-compatible”?

Not that I expect you to have answers to these questions, but they’re interesting in their own right as to Life and, more importantly, bear directly on your overall pedagogic objective: to talk about algorithm design informed by the nature of the problem to be solved. In this case we may care almost more about the “meta-nature” of the problem, i.e. its phasic (or not) character, than about the lower-level “obvious” nature of how individual cells and groups of cells evolve over some range of ticks.

In non-trivial software solving non-trivial exploratory problems, the meta-nature (and meta-meta-nature?) may be the real driver of design and of performance. Most especially at serious scale, with which so much of modern computing is concerned. In a sound bite: “How to know the meta-nature? Aye, there’s the rub!”

> I do dearly wish that C# or other languages would make an efficient “small set of bits” type that was a byte, ushort, uint or ulong behind the scenes, but gave you a high-level interface

What about BitArray/BitVector32? 🤔
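.NET’s BitVector32 is indeed a struct over a single int, though its sectioned-mask interface is lower-level than a true set type. For readers outside .NET, the wished-for idea — a set-like interface over one machine word — can be sketched in a few lines of Python (`TinyBitSet` is a made-up name, not a real library type; in C# the backing field would be a uint rather than Python’s unbounded int):

```python
# Hypothetical sketch: a "small set of bits" with a high-level, set-like
# interface, backed by a single integer (as a byte/ushort/uint/ulong would be).
class TinyBitSet:
    """Membership of items 0..width-1, stored in one integer."""

    def __init__(self, width=32):
        self.width = width
        self.bits = 0

    def add(self, i):
        self.bits |= 1 << i          # set bit i

    def discard(self, i):
        self.bits &= ~(1 << i)       # clear bit i

    def __contains__(self, i):
        return (self.bits >> i) & 1 == 1

    def __len__(self):
        return bin(self.bits).count("1")  # population count

s = TinyBitSet()
s.add(3)
s.add(5)
s.discard(3)
print(5 in s, 3 in s, len(s))  # True False 1
```

Every operation is one or two machine instructions on the underlying word, which is the efficiency the quoted wish is after.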

Yes I was also thinking about the “waste” of cells when going from an n-quad to an (n-1)-quad. When calculating a step of an n-quad, in principle you need to “lose” only a single row of cells around the perimeter. When using only the middle (n-1)-quad, you’re left with only a quarter of the number of original cells. The bigger n is, the bigger the difference between those values. It seems like we’re wasting information here that could be exploited at little or no cost — somehow.
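The gap is easy to quantify. Assuming an n-quad is 2^n cells on a side: one tick of the Life rule needs each cell’s 3×3 neighborhood, so only the one-cell-deep perimeter becomes undetermined, leaving (2^n − 2)² cells in principle recoverable, while keeping only the center (n-1)-quad keeps just (2^(n-1))² — a ratio that approaches 4 as n grows:

```python
# Cells recoverable after one tick of an n-quad (only the perimeter is lost)
# versus cells actually kept when only the center (n-1)-quad is retained.
def side(n):
    return 2 ** n  # an n-quad is 2^n cells on a side

for n in range(3, 9):
    recoverable = (side(n) - 2) ** 2  # drop one row/column around the edge
    kept = side(n - 1) ** 2           # the middle quarter
    print(n, recoverable, kept, round(recoverable / kept, 2))
```

At n = 8, that’s 64516 recoverable cells against 16384 kept — nearly three quarters of the computable information discarded.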

That’s a really interesting idea. I see no reason why it would not work in principle. Give it a try if you like and report back the results!

I’m not sure whether this would be more efficient, but it seems like it would do less rebuilding of quads, anyway.

The change to four Step()s instead of nine has a nice duality with your current implementation – see the gist under my username.

But performance testing shows that the extra work in extracting the right quads outweighs the savings in Step() calls – performance is worse :). Using the nine (n-2)-quads, the eight border (n-1)-quads are reused for the neighboring steps, so we get more benefit from the caches.

That extraction is a bunch of steps but it is certainly possible, and if we did so, we would indeed go from nine steps per recursion to four; this is a good observation.

That said: there is a reason why I described the algorithm the way I did! Tune in next time for the answer.
