Further directions

I got this far using only 3 small programs and 171 manual GRF evaluations. The programming was fun the same way Project Euler programming is, but with less sense of insight. The manual evaluations were mostly trivial, occasionally interesting, never tricky, and in aggregate tedious.

Where to go from here? I could extend the table up to size 10 just by burning a reasonable amount of CPU, but it gets unreasonable at size 11. For that I’d have to integrate the programs so that the generator knows how many functions simplified down to the one it’s asking the evaluator to compute.
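A rough sketch of that integration, where `simplify` stands in as a hypothetical placeholder for the real simplifier (the post doesn’t show its code): bucket the generated GRFs by simplified form, evaluate one representative per bucket, and weight each result by the bucket’s size.

```python
# Hypothetical sketch: rather than evaluating every generated GRF,
# group them by simplified form and evaluate one representative per
# group, keeping the group's size as the count of originals it covers.
# `simplify` is a stand-in for the real simplifier program.
from collections import Counter

def tally_by_simplified_form(exprs, simplify):
    """Map each simplified form to how many raw GRFs reduce to it."""
    counts = Counter()
    for e in exprs:
        counts[simplify(e)] += 1
    return counts
```

The evaluator then runs once per key, and each result is multiplied by its count when the table is filled in.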

Or I could extend the simplifier to match more sophisticated patterns. Another idea is static analysis: mark the output of S as "not zero", trace that fact through the expression, and mark the expression non-terminating if it reaches an M. A related analysis could detect an argument to M that never evaluates its own first argument.
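The first of those analyses could be sketched roughly as follows. The AST encoding, node names, and the `analyze` function are all my assumptions, since the post doesn’t show its representation; the idea is just that S never yields 0, so an M minimizing a provably nonzero body can never find the 0 it searches for.

```python
# Hypothetical sketch of the "not zero" analysis over a toy GRF AST:
#   ('Z',)          the zero constant
#   ('S', e)        successor of e
#   ('P', i)        projection of argument i
#   ('C', f, args)  composition: f applied to the results of args
#   ('M', f)        minimization: least y such that f(y, ...) = 0

NONZERO, UNKNOWN = 'nonzero', 'unknown'

class NonTerminating(Exception):
    """Raised when an M must find a zero of a provably nonzero body."""

def analyze(expr):
    """Return 'nonzero' if expr can never evaluate to 0, else 'unknown'."""
    tag = expr[0]
    if tag == 'S':
        analyze(expr[1])                # the body may hide an M of its own
        return NONZERO                  # S(x) = x + 1 is never 0
    if tag == 'C':
        for arg in expr[2]:             # visit arguments for hidden M's
            analyze(arg)
        return analyze(expr[1])         # the result's value is the head's
    if tag == 'M':
        if analyze(expr[1]) == NONZERO:
            raise NonTerminating(expr)  # the mu-search can never succeed
        return UNKNOWN
    return UNKNOWN                      # Z and P can both yield 0
```

For example, `analyze(('M', ('S', ('P', 0))))` raises `NonTerminating`, since S applied to anything is never 0.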

I could extend the list of positive-arity functions that simplify. The most ambitious path would be to automate generating that list; the least ambitious would be to manually work out the 96 size-8 GRFs that the evaluator can’t evaluate and the simplifier can’t simplify.

The most curious thing in all this is that very small Turing machines can generate really wacky behavior, yet these small GRFs, at least up to size 9, all behave in ways that are extremely pedestrian by comparison. Why?

