BMO1 Type problems are Turing Complete

Beaver Math Olympiad #1 (aka BMO1, wiki) is an unsolved 6-state Turing machine, which starts at (1, 2) and repeatedly iterates the function

A(a, b) -> A(a-b, 4b+2) if a > b
A(a, b) -> A(2a+1, b-a) if a < b
A(a, b) -> Halt if a = b
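The iteration is easy to experiment with. A minimal sketch (the function name `bmo1_step` and the trace below are ours, for illustration):

```python
def bmo1_step(a, b):
    """One iteration of the BMO1 map; returns None when a == b (halt)."""
    if a > b:
        return (a - b, 4 * b + 2)
    if a < b:
        return (2 * a + 1, b - a)
    return None  # a == b: halt

# Trace the first few iterates from the starting pair (1, 2).
state = (1, 2)
trace = [state]
for _ in range(5):
    nxt = bmo1_step(*state)
    if nxt is None:
        break
    trace.append(nxt)
    state = nxt
print(trace)  # → [(1, 2), (3, 1), (2, 6), (5, 4), (1, 18), (3, 17)]
```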

Determining whether BMO1 halts appears to be quite hard. Recently, several of us on the Discord came up with a Turing-complete generalization of problems of this type, which I named Linear Inequality Affine Transition Automata (LIATA), showing that the general class of such problems is uncomputable. This gives some heuristic evidence that problems of this general form may be hard.

History

The ideas developed as follows: Discord user Bard initially posted a reduction from Minsky-style counter machines to 3-variable BMO-type problems. Then mxdys posted a method to reduce the number of variables required to 2, as in BMO1, which I wrote up in detail, introducing the term LIATA (reproduced below).

Subsequently, savask pointed out that the LIATA concept had already been studied in the literature under the (more succinct!) name Piecewise Affine Functions (PAFs). Review of the literature is ongoing. I will stick my neck out a little and note that, in my view, it so far appears unlikely that any of the known methods suffice to solve BMO1 without some new idea.

Results from the literature

We have, however, learned a few things from the literature. A PAF can be characterized by its number of dimensions (2, in this case) and its number of regions (also 2 here, with the function identically zero outside those regions). We assume throughout that PAFs are over the integers, though other settings are also considered in the literature.

First, hard problems. Mortality is the question of whether all trajectories of a PAF eventually reach 0. Mortality is known to be uncomputable both for PAFs with 2 dimensions and an arbitrary number of regions, and for PAFs with 2 regions and an arbitrary number of dimensions.

For computable problems: in one dimension, mortality is decidable (though PSPACE-hard). Mortality is also decidable if the function is affine everywhere, i.e. if there is one region encompassing the whole space (note this is weaker than the sense of “region” used above).

However, the difficulty of mortality for a function that has one region and is zero outside that region is unknown over the integers (over the reals, it is decidable), as is the difficulty of mortality in the specific 2-region, 2-dimensional case.

References

For more information on the results cited here, see
Mortality of Iterated Piecewise Affine Functions over the Integers: Decidability and Complexity by Amir M. Ben-Amram (https://d-nb.info/1365366227/34)
Termination of Integer Linear Programs by Mark Braverman (https://mbraverm.princeton.edu/files/terminationCAV.pdf)


Here is the full proof that a LIATA can simulate a Minsky Machine.

Motivated by the Beaver Math Olympiad #1 (BMO1), we define a linear-inequality affine-transformation automaton (LIATA) as a starting vector \vec x = (x_1, x_2, \ldots , x_n) in \mathbb{Z}^n and a list of m transformation rules. A transformation rule is a list of inequalities
a_1 * x_1 + \ldots + a_n * x_n \,\{\lt,\leq,=\}\, c for constants a_i, c, together with an affine transformation, which may be specified abstractly as an n\times n matrix M and a length-n vector \vec v providing a constant offset. The idea of a transformation rule is that whenever \vec x satisfies all of the inequalities, we apply the affine transformation \vec x \rightarrow M\vec x + \vec v. We additionally require that the rules’ linear inequalities define non-intersecting regions of \mathbb{Z}^n, so that at most one rule applies to any \vec x.

The automaton is then evaluated by starting with \vec x and repeatedly applying the applicable transformation rule until none applies, at which point the automaton has halted.

A concrete example of such an automaton is the one from BMO1, which is specified by the following data:

\vec x = (1, 2). We will refer to this as (a, b) for ease of notation.
if a > b then (a, b) \rightarrow (a - b, 4b + 2)
if a < b then (a, b) \rightarrow (2a+1, b - a)

You can see that the automaton halts if a = b. It is currently unknown whether the automaton above, which corresponds to a 6-state Turing machine, halts.
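As a sanity check of the definition, here is a small evaluator for a LIATA in the matrix form above, instantiated with the BMO1 data (the names `run_liata` and `bmo1_rules` are ours, for illustration):

```python
def run_liata(x, rules, max_steps):
    """rules: list of (predicate, M, v). A rule fires when predicate(x)
    holds, replacing x with M x + v. Halts when no rule applies.
    Returns (halted, x, steps_taken)."""
    for step in range(max_steps):
        for pred, M, v in rules:
            if pred(x):
                n = len(x)
                x = [sum(M[i][j] * x[j] for j in range(n)) + v[i]
                     for i in range(n)]
                break
        else:
            return True, x, step  # no rule applied: halted
    return False, x, max_steps

# BMO1: (a,b) -> (a-b, 4b+2) if a > b; (a,b) -> (2a+1, b-a) if a < b.
bmo1_rules = [
    (lambda x: x[0] > x[1], [[1, -1], [0, 4]], [0, 2]),
    (lambda x: x[0] < x[1], [[2, 0], [-1, 1]], [1, 0]),
]
print(run_liata([1, 2], bmo1_rules, 5))  # → (False, [3, 17], 5)
```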

A different very simple model of computation is the Minsky Machine, a type of “counter machine”. A Minsky Machine has r registers, each storing an unbounded non-negative integer, and k instructions, numbered from 1 to k. Each instruction is one of:

  • i: Increment (r_i, k_i), which means add 1 to r_i, then go to instruction k_i.
  • i: JumpNonzeroDecrement (r_i, j_i, k_i), which means check if r_i is zero, if so, go to instruction j_i, else decrement r_i and go to instruction k_i.
  • i: Halt, which means the machine halts.
    (note that the i in the above is the instruction number)

It is known that Minsky Machines are Turing complete, even when the number of counters is restricted to 2. The simplicity of the model makes them a useful way to prove new computational models are Turing complete. In particular we show that the linear-inequality affine-transformation automata are Turing complete, even when the dimension of \vec x is 2, by describing how to turn a given 2-counter Minsky Machine into a dimension 2 LIATA.
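For concreteness, here is a minimal 2-counter Minsky Machine interpreter (the instruction encoding and the example program are our own illustrative choices; registers are 0-indexed here):

```python
def run_minsky(program, regs, max_steps=10_000):
    """program: dict mapping instruction number -> tuple.
    ('inc', r, k): add 1 to register r, go to instruction k.
    ('jzd', r, j, k): if register r is 0 go to j, else decrement r and go to k.
    ('halt',): stop. Returns the registers on halt, None if budget exhausted."""
    regs = list(regs)
    pc = 1
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == 'inc':
            regs[op[1]] += 1
            pc = op[2]
        elif op[0] == 'jzd':
            if regs[op[1]] == 0:
                pc = op[2]
            else:
                regs[op[1]] -= 1
                pc = op[3]
        else:  # halt
            return regs
    return None

# Example: move the contents of register 0 into register 1.
transfer = {1: ('jzd', 0, 3, 2), 2: ('inc', 1, 1), 3: ('halt',)}
print(run_minsky(transfer, [3, 0]))  # → [0, 3]
```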

The key idea is to represent the two counters of the Minsky Machine in the two dimensions of the LIATA. In particular, the “current memory” of a Minsky Machine is (x, y, k), where x and y are the two counters and k is the number of the next instruction to be executed. So k \leq K is bounded by the (finite) number of instructions K, while x and y are unbounded. The current memory of a 2D LIATA is (a, b). This (a, b) will represent any (x, y, k) via the following transformations:
a = x + y + 1
b = x + k*a
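This encoding is invertible: since 0 \leq x \lt a, the pair (a, b) determines (x, y, k) by a single division. A sketch (`encode`/`decode` are our names):

```python
def encode(x, y, k):
    a = x + y + 1           # a > x, so b = x + k*a lies in [k*a, (k+1)*a)
    return a, x + k * a

def decode(a, b):
    k, x = divmod(b, a)     # k = b // a, x = b % a (valid since 0 <= x < a)
    return x, a - x - 1, k  # y = a - x - 1

print(decode(*encode(3, 5, 7)))  # → (3, 5, 7)
```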

Now we need to describe how to translate each instruction of the Minsky Machine into one or more transformation rules, and prove that the evolution of the LIATA under these rules matches the evolution of the Minsky Machine.

For each step of the LIATA, we first must determine k, so that we know which Minsky Machine instruction to execute. This is always possible: by construction a = x + y + 1 > x, so b = x + k*a lies in [k*a, (k+1)*a). So our in-progress description of our LIATA can look like:
k * a \leq b \lt (k+1)*a \rightarrow execute instruction k.
It is easily seen that these inequalities are mutually exclusive.
If instruction k is Increment$(r, j)$, we must first increment r and then jump to instruction j. We’ll describe how to jump to an arbitrary instruction later. Incrementing r involves adding appropriate values to a and b, depending on the coefficients of r in the definitions of a and b in terms of x and y above. For example, to increment x:
a \rightarrow a+1
b \rightarrow b + k + 1
We are allowed to use k here, because each inequality only applies to one specific value of k. Similarly, to increment y:
a \rightarrow a + 1
b \rightarrow b + k
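These rules can be checked mechanically against the encoding a = x + y + 1, b = x + k*a (a small illustrative check; the encoding is restated so the snippet stands alone):

```python
def encode(x, y, k):
    return x + y + 1, x + (x + y + 1) * k  # (a, b)

def inc_x(a, b, k):
    return a + 1, b + k + 1  # incrementing x adds (1, k + 1) to (a, b)

def inc_y(a, b, k):
    return a + 1, b + k      # incrementing y adds (1, k) to (a, b)

# Incrementing a counter and re-encoding matches the affine rule.
for x in range(5):
    for y in range(5):
        for k in range(1, 5):
            assert inc_x(*encode(x, y, k), k) == encode(x + 1, y, k)
            assert inc_y(*encode(x, y, k), k) == encode(x, y + 1, k)
```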
If instruction k is Halt, then the translation is similarly straightforward: include no transition rule at all.
If instruction k is JumpNonzeroDecrement$(r, j', k')$, the translation is somewhat more complex, owing to the complexity of the instruction. In particular, we will need to refine the inequality given above to determine whether r is nonzero. If r is x, then again expanding the definitions of (a, b), x is zero just when
a = y+1
b = k * a = k * (y + 1)
So the previous
k * a \leq b \lt (k+1)*a \rightarrow execute instruction k
becomes
k * a = b \rightarrow execute instruction k with x = 0
k * a \lt b \lt (k+1)*a \rightarrow execute instruction k with x positive.

Similarly, if r is y, we see that y is zero just when
a = x + 1
b = x + k * a = x + k * (x + 1) = (k + 1)(x + 1) - 1
So the previous
k * a \leq b \lt (k+1)*a \rightarrow execute instruction k
becomes
(k + 1) * a - 1 = b \rightarrow execute instruction k with y = 0
k * a \leq b \lt (k+1)*a-1 \rightarrow execute instruction k with y positive.
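Both zero tests can be verified directly against the encoding (a quick illustrative check):

```python
def encode(x, y, k):
    a = x + y + 1
    return a, x + k * a

for x in range(5):
    for y in range(5):
        for k in range(1, 5):
            a, b = encode(x, y, k)
            assert (x == 0) == (b == k * a)            # x = 0 iff b = k*a
            assert (y == 0) == (b == (k + 1) * a - 1)  # y = 0 iff b = (k+1)*a - 1
```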

Now that we know if r is zero, we may need to decrement r, and either way we need to jump to a specified address. Decrementing r is the reverse of the increment operations above:
to decrement x
a \rightarrow a-1
b \rightarrow b - k - 1
and to decrement y
a \rightarrow a - 1
b \rightarrow b - k
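As with the increments, the decrement rules can be checked against the encoding (illustrative sketch, encoding restated for self-containment):

```python
def encode(x, y, k):
    a = x + y + 1
    return a, x + k * a

def dec_x(a, b, k):
    return a - 1, b - k - 1  # inverse of the x-increment rule

def dec_y(a, b, k):
    return a - 1, b - k      # inverse of the y-increment rule

for x in range(5):
    for y in range(5):
        for k in range(1, 5):
            assert dec_x(*encode(x + 1, y, k), k) == encode(x, y, k)
            assert dec_y(*encode(x, y + 1, k), k) == encode(x, y, k)
```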

All we need to explain now is jumps. We will use the fact that the composition of two affine transformations, as defined above, is again an affine transformation, and describe how to jump from j to k. Then each transformation rule defined above can be finished by appending the appropriate jump to the end.
We will see how to change j to k by again expanding the definitions of a and b above. We want to change
a = x + y + 1
b = x + j*a
to
a = x + y + 1
b = x + k*a
So we can simply send
a \rightarrow a
b \rightarrow b + (k - j)a
This completes the transformation and the proof.
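The jump rule can likewise be checked against the encoding (illustrative):

```python
def encode(x, y, k):
    a = x + y + 1
    return a, x + k * a

def jump(a, b, j, k):
    return a, b + (k - j) * a  # rewrite the address part of b from j to k

for x in range(5):
    for y in range(5):
        for j in range(1, 5):
            for k in range(1, 5):
                assert jump(*encode(x, y, j), j, k) == encode(x, y, k)
```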

Therefore, linear-inequality affine-transformation automata are Turing complete in dimension 2. This suggests that resolving problems about LIATAs may be difficult, but, much as with Collatz and FRACTRAN, a problem being hard in the general case does not imply that it is hard in any specific case.
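The whole construction can also be exercised end to end: simulate a tiny 2-counter Minsky Machine entirely through the (a, b) encoding, composing the region test, zero tests, increments/decrements, and jumps described above. All names here are ours, and the per-step dispatch stands in for the full list of transformation rules:

```python
def encode(x, y, k):
    a = x + y + 1
    return a, x + k * a

def liata_step(program, a, b):
    """One LIATA step simulating the Minsky Machine `program`: recover the
    instruction number k from the region k*a <= b < (k+1)*a, then apply the
    corresponding affine map (counter move composed with a jump)."""
    k = b // a
    op = program[k]
    if op[0] == 'halt':
        return None                    # no rule applies: halt
    if op[0] == 'inc':
        r, j = op[1], op[2]
        a, b = (a + 1, b + k + 1) if r == 0 else (a + 1, b + k)
        return a, b + (j - k) * a      # jump k -> j
    r, j, k2 = op[1], op[2], op[3]     # JumpNonzeroDecrement
    zero = (b == k * a) if r == 0 else (b == (k + 1) * a - 1)
    if zero:
        return a, b + (j - k) * a      # jump k -> j
    a, b = (a - 1, b - k - 1) if r == 0 else (a - 1, b - k)
    return a, b + (k2 - k) * a         # jump k -> k2

# Move register 0 into register 1, starting from (x, y) = (3, 0) at instruction 1.
transfer = {1: ('jzd', 0, 3, 2), 2: ('inc', 1, 1), 3: ('halt',)}
a, b = encode(3, 0, 1)
while True:
    nxt = liata_step(transfer, a, b)
    if nxt is None:
        break
    a, b = nxt
k, x = divmod(b, a)
print((x, a - x - 1, k))  # final (x, y, pc) → (0, 3, 3)
```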


Really cool write-up. Thanks star! Since this write-up, we’ve discovered that the LIATA idea already exists in the literature under the name Piecewise Affine Functions (PAF), and we’ve added a wiki page with the known information about this type of iteration: Piecewise Affine Function - BusyBeaverWiki