Here is the full proof that a LIATA can simulate a Minsky Machine.
Motivated by the Beaver Math Olympiad #1 (BMO1), we define a linear-inequality affine-transformation automaton (LIATA) as a starting vector \vec x = (x_1, x_2, \ldots , x_n) in \mathbb{Z}^n and a list of m transformation rules. A transformation rule is a list of inequalities
a_1 x_1 + \ldots + a_n x_n \,\{\lt,\leq,=\}\, c for integer constants a_i, c, together with an affine transformation, which may be specified as an n\times n matrix M and a length-n vector \vec v providing a constant offset. The idea of a transformation rule is that whenever \vec x satisfies all of the rule's inequalities, we apply the affine transformation \vec x \rightarrow M\vec x + \vec v. We additionally require that the inequality systems of distinct rules define non-intersecting regions of \mathbb{Z}^n, so that at most one rule applies to any given \vec x.
Then the automaton is evaluated by starting with \vec x and repeatedly applying the applicable transformation rule until none applies, at which point the automaton has halted.
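To make the evaluation rule concrete, here is a minimal Python sketch of a LIATA evaluator. Representing each rule as a (guard, transform) pair of functions is our own simplification of the matrix-and-offset form, and the names `run_liata` and `max_steps` are hypothetical:

```python
# A minimal LIATA evaluator (a sketch).  Each rule is a pair
# (guard, transform): guard checks the rule's linear inequalities,
# and transform applies the affine map x -> M x + v.
def run_liata(x, rules, max_steps=100000):
    """Apply the (unique) applicable rule until none applies."""
    for _ in range(max_steps):
        for guard, transform in rules:
            if guard(x):
                x = transform(x)
                break
        else:
            return x  # halted: no rule applies
    return None  # gave up: no halt within max_steps
```

Since the rules' regions are required to be disjoint, scanning for the first matching guard is unambiguous.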
A concrete example of such an automaton is the one from BMO1, which is specified by the following data:
\vec x = (1, 2). We will refer to this as (a, b) for ease of notation.
if a > b then (a, b) \rightarrow (a - b, 4b + 2)
if a < b then (a, b) \rightarrow (2a+1, b - a)
The automaton halts exactly when a = b, since then neither rule applies. It is currently unknown whether the automaton above, which corresponds to a 6-state Turing machine, halts.
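As a sanity check, the BMO1 automaton can be stepped in a few lines of Python (`bmo1_step` is our name for the combined rule):

```python
# One step of the BMO1 automaton; returns None when a == b (halt).
def bmo1_step(a, b):
    if a > b:
        return (a - b, 4 * b + 2)
    if a < b:
        return (2 * a + 1, b - a)
    return None  # a == b: no rule applies

# The first few steps from the starting vector (1, 2):
state = (1, 2)
trace = []
for _ in range(5):
    state = bmo1_step(*state)
    trace.append(state)
print(trace)  # [(3, 1), (2, 6), (5, 4), (1, 18), (3, 17)]
```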
A different, very simple model of computation is the Minsky Machine, a type of “counter machine”. A Minsky Machine has r registers, each storing an unbounded nonnegative integer, and K instructions, numbered from 1 to K. Each instruction is either:
- i: Increment (r_i, k_i), which means add 1 to r_i, then go to instruction k_i.
- i: JumpNonzeroDecrement (r_i, j_i, k_i), which means: if r_i is zero, go to instruction j_i; otherwise, decrement r_i and go to instruction k_i.
- i: Halt, which means the machine halts.
(note that the i in the above is the instruction number)
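The instruction set above can be captured in a short interpreter; the dictionary-based program format and the names `run_minsky`, `inc`, `jzdec` are our own conventions, not part of the standard definition:

```python
# A small Minsky Machine interpreter (a sketch).  A program maps
# instruction numbers (1-indexed) to tuples; registers are a list.
def run_minsky(program, regs, max_steps=100000):
    k = 1  # execution starts at instruction 1
    for _ in range(max_steps):
        instr = program[k]
        if instr[0] == "halt":
            return regs
        if instr[0] == "inc":
            _, r, nxt = instr
            regs[r] += 1
            k = nxt
        else:  # ("jzdec", r, if_zero, if_nonzero)
            _, r, if_zero, if_nonzero = instr
            if regs[r] == 0:
                k = if_zero
            else:
                regs[r] -= 1
                k = if_nonzero
    return None  # did not halt within max_steps

# Example: move the contents of register 0 into register 1.
program = {1: ("jzdec", 0, 3, 2), 2: ("inc", 1, 1), 3: ("halt",)}
print(run_minsky(program, [3, 0]))  # [0, 3]
```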
It is known that Minsky Machines are Turing complete, even when the number of counters is restricted to 2. The simplicity of the model makes them a useful way to prove that new computational models are Turing complete. In particular, we show that linear-inequality affine-transformation automata are Turing complete, even when the dimension of \vec x is 2, by describing how to turn a given 2-counter Minsky Machine into a dimension-2 LIATA.
The key idea is to represent the two counters of the Minsky Machine as the two dimensions of the LIATA. In particular, the “current memory” of a Minsky Machine is (x, y, k) for the two counters x and y and the next instruction to be executed k. So k ranges over the finite set 1 \leq k \leq K, while x and y are unbounded. The current memory of a 2D LIATA is (a, b). This (a, b) will represent any (x, y, k) via the following encoding:
a = x + y + 1
b = x + k*a
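This encoding is invertible, which we can check directly (`encode` and `decode` are our own helper names):

```python
# The encoding of a Minsky state (x, y, k) as a LIATA state (a, b),
# and its inverse.
def encode(x, y, k):
    a = x + y + 1
    return (a, x + k * a)

def decode(a, b):
    k = b // a       # valid because 0 <= x < a
    x = b - k * a
    return (x, a - x - 1, k)

# Decoding inverts encoding, so no information is lost.
assert all(decode(*encode(x, y, k)) == (x, y, k)
           for x in range(6) for y in range(6) for k in range(1, 6))
```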
Now we need to describe how to translate each instruction of the Minsky Machine into one or more transformation rules, and prove that the evolution of the LIATA under these rules matches that of the Minsky Machine.
For each step of the LIATA, we must first determine what k is, so we know which MM instruction to execute. This is always possible, since by construction a = x + y + 1 > x, so that b = x + k*a lies in [k*a, (k+1)*a). So our in-progress description of the LIATA looks like:
k * a \leq b \lt (k+1)*a \rightarrow execute instruction k.
These inequalities are mutually exclusive for distinct values of k.
If instruction k is Increment$(r, j)$, we must first increment r and then jump to instruction j. We'll describe how to jump to an arbitrary instruction later. Incrementing r amounts to adding appropriate values to a and b, determined by the coefficients of r in the definitions of a and b in terms of x and y above. For example, to increment x:
a \rightarrow a+1
b \rightarrow b + k + 1
We are allowed to use k here because each rule applies for only one specific value of k, so within the rule k is a constant. Similarly, to increment y:
a \rightarrow a + 1
b \rightarrow b + k
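Both increment rules can be verified mechanically against the encoding (the helper names here are ours):

```python
# Verify that the increment rules agree with the encoding
# a = x + y + 1, b = x + k*a.
def encode(x, y, k):
    a = x + y + 1
    return (a, x + k * a)

def inc_x(a, b, k):  # (a, b) -> (a + 1, b + k + 1)
    return (a + 1, b + k + 1)

def inc_y(a, b, k):  # (a, b) -> (a + 1, b + k)
    return (a + 1, b + k)

for x in range(5):
    for y in range(5):
        for k in range(1, 6):
            a, b = encode(x, y, k)
            assert inc_x(a, b, k) == encode(x + 1, y, k)
            assert inc_y(a, b, k) == encode(x, y + 1, k)
```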
If instruction k is Halt, the translation is even simpler: include no transformation rule at all, so that no rule applies and the LIATA halts.
For JumpNonzeroDecrement$(r, j, l)$ (if r is zero, jump to instruction j; otherwise decrement r and go to instruction l; we rename the third argument to avoid clashing with the current instruction number k), the translation is somewhat more complex. In particular, we will need to refine the inequality given above to determine whether r is nonzero. If r is x, then again expanding the definitions of (a, b), x is zero just when
a = y+1
b = k * a = k * (y + 1)
So the previous
k * a \leq b \lt (k+1)*a \rightarrow execute instruction k
becomes
k * a = b \rightarrow execute instruction k with x = 0
k * a \lt b \lt (k+1)*a \rightarrow execute instruction k with x positive.
Similarly, if r is y, we see that y is zero just when
a = x + 1
b = x + k * a = x + kx + k = (k + 1)(x + 1) - 1
So the previous
k * a \leq b \lt (k+1)*a \rightarrow execute instruction k
becomes
(k + 1) * a - 1 = b \rightarrow execute instruction k with y = 0
k * a \leq b \lt (k+1)*a-1 \rightarrow execute instruction k with y positive.
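Both zero tests can be checked against the encoding in the same mechanical way (helper names ours):

```python
# Verify the refined zero-test inequalities under the encoding
# a = x + y + 1, b = x + k*a.
def encode(x, y, k):
    a = x + y + 1
    return (a, x + k * a)

for x in range(6):
    for y in range(6):
        for k in range(1, 6):
            a, b = encode(x, y, k)
            assert (b == k * a) == (x == 0)            # x-zero test
            assert (b == (k + 1) * a - 1) == (y == 0)  # y-zero test
```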
Now that we know if r is zero, we may need to decrement r, and either way we need to jump to a specified address. Decrementing r is the reverse of the increment operations above:
to decrement x
a \rightarrow a-1
b \rightarrow b - k - 1
and to decrement y
a \rightarrow a - 1
b \rightarrow b - k
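The decrement rules can be verified just like the increments (helper names ours); note they are only ever applied when the corresponding register is positive:

```python
# Verify that the decrement rules agree with the encoding
# a = x + y + 1, b = x + k*a (applied only when the register > 0).
def encode(x, y, k):
    a = x + y + 1
    return (a, x + k * a)

def dec_x(a, b, k):  # (a, b) -> (a - 1, b - k - 1)
    return (a - 1, b - k - 1)

def dec_y(a, b, k):  # (a, b) -> (a - 1, b - k)
    return (a - 1, b - k)

for x in range(1, 6):
    for y in range(1, 6):
        for k in range(1, 6):
            a, b = encode(x, y, k)
            assert dec_x(a, b, k) == encode(x - 1, y, k)
            assert dec_y(a, b, k) == encode(x, y - 1, k)
```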
All we need to explain now is jumps. We will use the fact that the composition of two affine transformations, as defined above, is again an affine transformation, and describe how to jump from j to k. Then each transformation rule defined above can be finished by appending the appropriate jump to the end.
We will see how to change j to k by again expanding the definitions of a and b above. We want to change
a = x + y + 1
b = x + j*a
to
a = x + y + 1
b = x + k*a
So we can simply send
a \rightarrow a
b \rightarrow b + (k - j)a
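This jump rule, too, checks out against the encoding (helper names ours):

```python
# Verify the jump rule: retarget the stored instruction from j to k
# without disturbing the counters, under a = x + y + 1, b = x + j*a.
def encode(x, y, k):
    a = x + y + 1
    return (a, x + k * a)

def jump(a, b, j, k):  # (a, b) -> (a, b + (k - j)*a)
    return (a, b + (k - j) * a)

for x in range(5):
    for y in range(5):
        for j in range(1, 6):
            for k in range(1, 6):
                assert jump(*encode(x, y, j), j, k) == encode(x, y, k)
```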
This completes the transformation and the proof.
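Putting the pieces together, here is an end-to-end sketch of the construction: `translate` compiles a 2-counter Minsky program into LIATA rules by composing the increment/decrement maps with the appropriate jump. The program format and all helper names (`translate`, `encode`, `decode`, `run_liata`) are our own conventions, not part of the formal definition:

```python
# End-to-end sketch: compile a 2-counter Minsky Machine into a 2D
# LIATA, run the LIATA, and decode the result.
def encode(x, y, k):
    a = x + y + 1
    return (a, x + k * a)

def decode(a, b):
    k = b // a
    x = b - k * a
    return (x, a - x - 1, k)

def translate(program):
    """Return LIATA rules as (guard, transform) pairs over (a, b)."""
    rules = []
    for k, instr in program.items():
        if instr[0] == "halt":
            continue  # no rule for k: the LIATA halts here
        if instr[0] == "inc":
            _, r, nxt = instr
            # increment register r, then jump k -> nxt; d is the offset
            # to b, and the jump uses the post-increment value a + 1
            d = k + 1 if r == 0 else k
            rules.append((
                lambda a, b, k=k: k * a <= b < (k + 1) * a,
                lambda a, b, d=d, k=k, nxt=nxt:
                    (a + 1, b + d + (nxt - k) * (a + 1)),
            ))
        else:  # ("jzdec", r, if_zero, if_nonzero)
            _, r, z, nz = instr
            if r == 0:
                zero_guard = lambda a, b, k=k: b == k * a
                pos_guard = lambda a, b, k=k: k * a < b < (k + 1) * a
                d = k + 1
            else:
                zero_guard = lambda a, b, k=k: b == (k + 1) * a - 1
                pos_guard = lambda a, b, k=k: k * a <= b < (k + 1) * a - 1
                d = k
            # zero: jump k -> z; nonzero: decrement, then jump k -> nz
            rules.append((zero_guard,
                          lambda a, b, k=k, z=z: (a, b + (z - k) * a)))
            rules.append((pos_guard,
                          lambda a, b, d=d, k=k, nz=nz:
                              (a - 1, b - d + (nz - k) * (a - 1))))
    return rules

def run_liata(state, rules, max_steps=100000):
    for _ in range(max_steps):
        for guard, transform in rules:
            if guard(*state):
                state = transform(*state)
                break
        else:
            return state  # halted
    return None

# Example: move register 0 into register 1, starting from (3, 0).
program = {1: ("jzdec", 0, 3, 2), 2: ("inc", 1, 1), 3: ("halt",)}
final = run_liata(encode(3, 0, 1), translate(program))
print(decode(*final))  # (0, 3, 3): x = 0, y = 3, halted at instr 3
```

Each Minsky step corresponds to exactly one LIATA step, so the simulation is lockstep.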
Therefore, the linear-inequality affine-transformation automata are Turing complete in dimension 2. This suggests that resolving questions about LIATA is difficult in general, but much as with the Collatz conjecture and FRACTRAN, a problem being hard in the general case does not imply that it is hard in any specific case.