At a later time, I plan to upload more detailed info.

For now, a recent list of the most stubborn machines to solve is available here: https://cdn.discordapp.com/attachments/1028746861395316776/1034883145071341659/mateon_uni_tnf.txt

I am sorry to report that a search with a time limit of 10,000,000 decides only one of the machines in your list (#13458880) as a bouncer. (But that’s progress, right?)

That’s actually quite a good result, because this list includes results reported by @univerz from an inductive prover.

It’s sort of true (if overly simplistic) that some deciders naturally complement each other whereas others naturally compete. My fallible understanding is that: (a) this bouncer decider’s most natural competitors are an inductive prover like that one, and GitHub - Iijil1/Bruteforce-CTL: An attempt to proof that a turing machine doesn't halt by finding a closed tape language using bruteforce (ironically not very brutish, and explicitly tunable for bouncers); (b) the inductive prover hasn’t had the same level of publication/documentation work done; (c) Bruteforce-CTL needs a whole lot more than 3.5 minutes to make much progress.

Update based on Discord conversations and better searching:

- @univerz’s prover matters only as a benchmark. (“its under continuous construction for breaking BB6 record; but i do not think we can get that code to bbchallenge replicability standards, its too complicated”)
- Bruteforce-CTL probably isn’t needed (still interesting though)
- “Rule-proving simulation acceleration” by @sligocki (thread: Shawn's Holdouts) has the potential to generalize/subsume the “bouncers” decider, based on reported results. I haven’t yet checked its performance, ease of reproduction, and ease of proof. Your decider looks good on those fronts…

Ha! I found another bouncer in there: #347505. You have to increase Steps to 30,000 in the Simulation Parameters to see the cyclic behaviour. It has three partitions and 104 runs (see bbchallenge/Bouncers at main · TonyGuil/bbchallenge · GitHub for terminology).

I finally found the time to finish my description of the verification process – see the GitHub link above. So now interested parties can write their own verifiers!

Having said that, I don’t know whether this is still relevant. Everybody seems to be working on Finite Automata Reduction these days. Have I become obsolete?

Thank you!

The relevance issue is subjective, speculative, and fluid.

Discord is good for such conversations and has been more active.

To summarize my impressions:

- The active community right now is small; many are busy with non-BB things.
- The most active topic right now is Halting Segment; we want to finish it off while it’s fresh.
- I love finite automata! I just hit a milestone ~yesterday that lets me pause work on the subject. It’s not something everyone’s working on. There’s ample space for you to compete or help. It doesn’t cover/obsolete a Bouncers decider.
- CPS may or may not be the next thing after that to draw community attention – depends on when key proponents free up the time to do the reproduction/verification work. It doesn’t cover/obsolete a Bouncers decider either.
- The most likely thing to cover/obsolete a Bouncers decider is general accelerated simulation / inductive proof. There’s one hedged opinion that it should - but the job of building such a decider to the reproducibility and verifiability standards is tough and not in progress. Undone work can’t obsolete done work. (@sligocki does have real, working code with potential to be reproduced and proven.)

I’ll be taking a look at your new work.

I’m pretty confident guessing that either your direction or Shawn’s (et al.) must be pursued in order to finish off BB(5).

I must add that you are absolutely not obsolete, and that no contributor is or ever will be!

Really appreciate your work

Note that we have put in place quite a precise process ([Debate & Vote] Deciders’ validation process) for officially applying deciders, and any decider that ticks all the boxes is automatically applied.

However, the process is quite demanding which is why everything is taking time.

The process that you refer to is unnecessarily restrictive! Clearly, it is enough to validate the Verifier rather than the Decider. How the Decider arrives at its conclusions is irrelevant – as long as its output can be verified, that should be enough.

- For Translated Cyclers, the verification process is trivial. So any machine that has a verifiable certificate should be accepted.
- For Bouncers, the verification process (as documented on GitHub) is more complicated. But it is still simpler than the Decider. Validating my Decider is unnecessary.
- I have an idea for useful verification data for Halting Segments. I have modified my Halting Segments Decider so that it visits the search nodes in exactly the same order as @Iijil’s Go implementation. Now when I arrive at a node that has already been seen according to my broader criterion (see [Decider] Halting Segment - #5 by TonyG), I can output the IndexString of the original matched node into the dvf, along with the index of the search node where it is required. So it is a simple matter to verify that the current position satisfies the IndexString according to my criteria. (Never mind if this explanation is a little incomplete; when I have the code ready I will post a fuller account on Deciders:Halting Segments. My point is that we don’t have to restrict ourselves artificially to validating Deciders, when there are other ways of achieving the confidence level required.)
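To make the Verifier-first argument concrete: for Translated Cyclers, a certificate can be as small as a pair of step counts, and checking it only requires honest simulation. Below is a minimal sketch in Python of what such a verifier could look like; the machine encoding, the certificate shape `(t1, t2)`, and the simplified record-segment condition are all my own assumptions for illustration, not bbchallenge’s actual decider or file formats.

```python
def verify_translated_cycler(tm, t1, t2):
    """Verify a hypothetical Translated Cycler certificate (t1, t2).

    tm maps (state, read_symbol) -> (write_symbol, move, next_state)
    with move in {-1, +1}; a missing key means the machine halts.
    Simplified sufficient condition: at steps t1 and t2 the head is at
    a new all-time-rightmost cell in the same state, and the tape over
    the segment visited in between matches after translation."""
    tape, pos, state = {}, 0, "A"
    rightmost, records = 0, set()
    snap = None      # (tape copy, head, state) at step t1
    visited = []     # head positions during steps t1+1 .. t2
    for step in range(1, t2 + 1):
        key = (state, tape.get(pos, 0))
        if key not in tm:
            return False          # halted before reaching step t2
        write, move, state = tm[key]
        tape[pos] = write
        pos += move
        if pos > rightmost:
            rightmost = pos
            records.add(step)     # head broke its rightmost record
        if step == t1:
            snap = (dict(tape), pos, state)
        elif step > t1:
            visited.append(pos)
    if snap is None or t1 not in records or t2 not in records:
        return False
    tape1, pos1, state1 = snap
    if state1 != state or pos <= pos1:
        return False
    d = pos - pos1                # translation offset per period
    left = min(visited + [pos1])  # leftmost cell the period touches
    return all(tape1.get(c, 0) == tape.get(c + d, 0)
               for c in range(left, pos1 + 1))
```

The point is that this check is far simpler than whatever search produced `(t1, t2)` in the first place.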

The process has been discussed at length on the forum and on discord and is currently the one in place.

I 100% agree that there is merit in the verifier approach, while raising the point that it is probably not universal (some techniques may not have any suitable verifier).

My opinion is that we should move to this more agile governance model (i.e. validating machines on the fly based on verifier data) once we reach a small set of undecided machines, which, given the projected results of current methods discussed on Discord, should be around 10,000 machines.

Finally, note that the restrictiveness and constraints in the current process are intended to ensure maximal trust in validated techniques and to do the groundwork necessary for publication (reproductions, formal proofs, etc.).

You are more than welcome to express your opinion in the thread and/or discord.

I have discovered a small problem, which means that six of my claimed Bouncers are of uncertain status. The problem is that `Bouncer::CheckTapesEquivalent` checks that two tape descriptors are equivalent, but only for the specific values of their `RepeaterCount` array. For the inductive proof to hang together, it should check that the tapes are equivalent for any `RepeaterCount` values greater than or equal to the current ones; I have amended the code to check for this, and six machines fail the amended test. I am pondering the best way to fix this. I suspect that most of these six are in fact Bouncers, but not all of them.
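To see why equivalence at specific `RepeaterCount` values isn’t enough for the inductive step, here is a toy illustration (my own made-up descriptors, not the actual `TapeDescriptor` structures): two run-length tapes can expand to identical strings at the current count yet diverge at every larger count.

```python
def expand(descriptor, n):
    """Expand a run-length tape descriptor at repeater count n.
    A descriptor is a list of (word, count) pairs; count is an int,
    or the string 'n' for the symbolic repeater count."""
    return "".join(word * (n if count == 'n' else count)
                   for word, count in descriptor)

# Two toy descriptors that happen to coincide at the current count:
growing = [("1", 'n'), ("0", 1)]   # expands to 1^n 0
fixed   = [("1", 2),  ("0", 1)]    # always expands to 110

assert expand(growing, 2) == expand(fixed, 2) == "110"  # point check passes
assert expand(growing, 3) != expand(fixed, 3)           # diverges for n >= 3
```

A check done only at the current count accepts this pair; a check quantified over all larger counts correctly rejects it.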

The 3543 Bouncers that I announced in the Finite Automata Reduction holdouts are unaffected by this bug.

The problem is solved. All the uncertain cases turned out to be Bouncers after all. I have posted the fix to GitHub, and updated the Readme file to better explain the verification process.

**Note:** The format of the Decider Verification File has changed slightly: `nRuns` is now `ushort`, and `RepeaterCount` has been removed from `RunDescriptor`. If anybody out there is verifying, I apologise for the inconvenience.

Re: validating Decider vs. Verifier. I think this could be a bit of a terminology issue. I completely agree with you @TonyG that only the Verifier needs to be validated. Effectively, I would call your Verifier the “True Decider” in your system while the thing you’re calling a decider is sort of a heuristic detail that helps you search for a proof certificate. I’ve been really interested in this type of split for a long time, I’m excited to see that you’ve implemented it. I’m reading up now.

FWIW, here’s a rebased index showing which machines in today’s undecided index remain unsolved by “Bouncers” with the 1M limit.

undecided_index_filtered_by_Bouncers_1M.umf (342.7 KB)

Sadly, the larger search now outdoes the fast 100K search by only 9 beavers. Still worthwhile, and reordering the decider runs could slash the cost a great deal.

On Discord, it was noted that for a few bouncers, Tony’s approach has a huge advantage over Shawn’s otherwise more general approach: Shawn’s code uses fixed-size blocks, so a bouncer with (real example) 4-, 5-, and 7-periodic runs will escape detection.

Funny zoological point: The class solved by this decider could be called the “quadratic” bouncers (in analogy with the “exponential” counters). This is by far the main and largest group of “bouncing” machines. However, there are others which are surely bouncing, but are keeping themselves busy to a higher degree along the way. (Sorry, didn’t have a simple 5-state example handy.) Perhaps the zoology should distinguish these cases.

@sligocki discovered some bouncers which are decidable but only after more than doubling the 1M step limit, which is just amazing.

```
[justinb@beavers Bouncers]$ ./DecideBouncers -X2087834 -T4000000 -S20000
100% 1 1
Decided 1 out of 1
Elapsed time 0.045
0 Unilateral
0 Bilateral
1 Translated
0 Double
1 Multiple
0 Partitioned
0 Probable Bells
2087834: 28 runs
2087834: RepeaterPeriod 1112
[justinb@beavers Bouncers]$ ./DecideBouncers -X2136479 -T4000000 -S20000
100% 1 1
Decided 1 out of 1
Elapsed time 0.032
0 Unilateral
0 Bilateral
1 Translated
0 Double
1 Multiple
0 Partitioned
0 Probable Bells
2136479: 28 runs
2136479: RepeaterPeriod 928
```

@savask (Discord) announced a new Bouncers implementation.

This one is written in Haskell, a high-level functional programming language, and is able to express the method, with comments and examples, in under 500 lines.

(This doesn’t include the seed DB and index file integration. It uses the TM text format.)

The linear/quadratic rule for detecting Cycle boundaries is the same, as is the high-level matching strategy. Some finer implementation details differ, and it doesn’t try to be DVF-compatible with Tony’s program. Savask reports that it solves the same machine population, though, so it would corroborate the results by the same standards as most bbchallenge deciders.

Exciting stuff.

I have pushed my new Bouncer code + docs to bbchallenge/Bouncers at main · TonyGuil/bbchallenge · GitHub. The verification format is now as simple as I think it can possibly be for a mechanical verifier. Let me know your thoughts, everybody!

An interesting parsing problem arises from @savask’s bouncers code.

We try to match the format `s(n) = u1 w1^n u2 w2^n ... uk wk^n S>`, given `s(0), s(1), s(2)`. Example from @savask:

```
For example, if we have
s0 = 101 S>
s1 = 110011 S>
s2 = 110100111 S>
then
s(n) = 1 10^n 0 1^n 1 S>
and the tape formula will be
1 (10) 0 (1) 1 S>.
```

This is currently solved using dynamic programming, but we are exploring whether there is a more efficient algorithm.
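As an aside, the dynamic-programming solution can be sketched as follows (my own Python reconstruction of the idea, not savask’s actual code). The key observation: if `a` fixed-text symbols and `r` repeater symbols have been consumed so far, the parse positions in `s(0)`, `s(1)`, `s(2)` are `a`, `a+r` and `a+2r` respectively, so the pair `(a, r)` is all the state that needs memoizing.

```python
from functools import lru_cache

def dp_parse(s0, s1, s2):
    """Find a decomposition s(n) = u1 w1^n u2 w2^n ... given s(0..2).
    Returns [(kind, text), ...] with kind 'u' (fixed) or 'w' (repeater),
    or None if no decomposition exists."""
    R = len(s1) - len(s0)          # total repeater length per copy
    if R < 0 or len(s2) - len(s1) != R:
        return None

    @lru_cache(maxsize=None)
    def solve(a, r):
        # a fixed symbols and r repeater symbols consumed so far
        if a == len(s0) and r == R:
            return []
        # option 1: consume one fixed symbol shared by all three strings
        if a < len(s0) and s0[a] == s1[a + r] == s2[a + 2 * r]:
            rest = solve(a + 1, r)
            if rest is not None:
                return [('u', s0[a])] + rest
        # option 2: consume a repeater w (one copy in s1, two in s2)
        for L in range(1, R - r + 1):
            w = s1[a + r : a + r + L]
            if s2[a + 2 * r : a + 2 * r + 2 * L] == w + w:
                rest = solve(a, r + L)
                if rest is not None:
                    return [('w', w)] + rest
        return None

    return solve(0, 0)

def build(parts, n):
    """Reconstruct s(n) from a decomposition."""
    return "".join(t * (n if kind == 'w' else 1) for kind, t in parts)
```

Any parse this returns reconstructs all three input strings by construction, though it may not be the canonical decomposition.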

The first idea was the following greedy algorithm by @savask:

If all strings start with the same symbol, throw it away. If not, try to find the longest prefix w of s1 such that ww is a prefix of s2, and throw those away.

Unfortunately, this fails on examples like the following:

```
s0 = 1111111011
s1 = 11110111101111011111111011
s2 = 111101111101111011110111101111011111111011
Correct parsing = 111 (101111) (0111101111) 1111011
```
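For concreteness, here is the greedy algorithm as a Python sketch (my own reconstruction from the description above, not savask’s code). It happily parses the earlier `101 / 110011 / 110100111` example, but on this counterexample it greedily consumes the common prefix `1111` where the correct parse keeps only `111`, and then fails to find any repeater.

```python
def greedy_parse(s0, s1, s2):
    """savask-style greedy parse (reconstruction): while symbols remain,
    if all three strings start with the same symbol treat it as fixed
    text; otherwise take the longest prefix w of s1 such that ww is a
    prefix of s2 as the next repeater. Returns [(kind, text), ...] with
    kind 'u' (fixed) or 'w' (repeater), or None on failure."""
    parts = []
    while s0 or s1 or s2:
        if s0 and s1 and s2 and s0[0] == s1[0] == s2[0]:
            # same leading symbol everywhere: fixed ('u') material
            if parts and parts[-1][0] == 'u':
                parts[-1] = ('u', parts[-1][1] + s0[0])
            else:
                parts.append(('u', s0[0]))
            s0, s1, s2 = s0[1:], s1[1:], s2[1:]
        else:
            # heads differ: take the longest prefix w of s1 with ww in s2
            for k in range(len(s1), 0, -1):
                w = s1[:k]
                if s2.startswith(w + w):
                    parts.append(('w', w))
                    s1, s2 = s1[k:], s2[2 * k:]
                    break
            else:
                return None  # no candidate repeater: greedy is stuck
    return parts

def build(parts, n):
    """Reconstruct s(n) from a parse."""
    return "".join(t * (n if kind == 'w' else 1) for kind, t in parts)
```

The parse it returns need not be the canonical one (any decomposition that rebuilds `s(0), s(1), s(2)` is acceptable), which is why validity is best checked by reconstructing the strings.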

@savask compiled a list of instances of s0, s1, s2 and correct parsing in the following text files as a benchmark to try algos on:

examples_hard.txt (16.1 KB)

examples.txt (104.8 KB)

I wrote another implementation of savask’s algorithm, in Rust. Like all my other deciders, it generates certificates that are then checked by a verifier proven correct in Coq. Thus, the results are almost guaranteed correct. This should bring us closer to a Bouncers decider that meets the verifiability & reproducibility standards – I think only the decider-correctness section of the document is missing?

When run on the 32632 undecided machines, it decides 29799 machines in about 10 seconds. This is the same 29799 as Tony’s decider, according to @UncombedCoconut’s data. If I understand savask’s comments correctly, this also agrees with their implementation – since it’s slower by a few orders of magnitude and requires a different I/O format, I didn’t bother actually running it to confirm.

The limits I need to get these results are

```
const SPACE_LIMIT: usize = 1024;
const TIME_LIMIT: u32 = 250000;
```

Notably, this is less than Tony’s decider – reducing the 1M step limit to 250k prevents it from deciding one machine.

This is very impressive work; you are really pushing bbchallenge onto the track of automatic proofs, faster than I could have imagined in my dreams!!

I am going to read your code in depth as well as starting to write about bouncers in parallel.

Thank you!

Earlier in May @Iijil also proposed an implementation of bouncers: GitHub - Iijil1/Bouncers: A bouncer decider. Proving that some turing machines don't halt.