Hi, thanks!
Right - it’s an assumption about which searches are futile to continue. That assumption is false, but it has two redeeming qualities: (1) it can only be wrong at fixed depths, so it can’t block a solution entirely; (2) statistically, it saves far more time than it wastes by making you re-search machines.
I doubt it, but it was a vast majority at all depths up to 9 (100% for power-of-2 depths).
Pretty much! For state i to be a “sink state” means DFA[i][0] == i && DFA[i][1] == i. (Consequence: if DFA[i][0] is known and DFA[i][1] is under construction, you can’t be sure i is a sink but you might be sure it isn’t.)
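To make the “can’t be sure it is, might be sure it isn’t” distinction concrete, here is a minimal sketch (the `Dfa` struct and `is_sink` helper are my own names, not from the actual code; transitions still under construction are modeled as `None`):

```rust
// Hedged sketch: a binary DFA as a transition table, where a transition
// that is still under construction is None.
struct Dfa {
    delta: Vec<[Option<usize>; 2]>,
}

impl Dfa {
    /// Three-valued sink test for state i:
    /// Some(true)  - both transitions known, both self-loops (a sink),
    /// Some(false) - some known transition leaves i (certainly not a sink),
    /// None        - can't tell yet (an unknown transition might leave i).
    fn is_sink(&self, i: usize) -> Option<bool> {
        match self.delta[i] {
            // Both transitions known: sink iff DFA[i][0] == i && DFA[i][1] == i.
            [Some(a), Some(b)] => Some(a == i && b == i),
            // One known transition already leaves i: definitely not a sink.
            [Some(a), None] | [None, Some(a)] if a != i => Some(false),
            // Known transitions (if any) self-loop, but one is unknown.
            _ => None,
        }
    }
}

fn main() {
    let dfa = Dfa {
        // State 0 escapes to 1; state 1 self-loops on both; state 2 has
        // a self-loop on 0 but its 1-transition is not yet decided.
        delta: vec![[Some(1), Some(0)], [Some(1), Some(1)], [Some(2), None]],
    };
    assert_eq!(dfa.is_sink(0), Some(false));
    assert_eq!(dfa.is_sink(1), Some(true));
    assert_eq!(dfa.is_sink(2), None);
}
```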
The Rust code computes this power-of-2 bound by extracting the high bit of n, tracks the run of non-sink states starting at i=0, and rejects the DFA (just as if ExtendNFA had returned false) when the count exceeds that bound. To guard against off-by-one errors: I confirm that “the count” is, e.g., 1 if the run spans i=0 to i=0 inclusive.
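A sketch of that check, with the off-by-one convention spelled out (the helper names and the assumption that `n` is the number of DFA states are mine, not the actual code’s):

```rust
// Hedged sketch, not the real implementation.

/// Largest power of two <= n, by extracting the high bit (assumes n >= 1).
fn pow2_bound(n: usize) -> usize {
    1usize << (usize::BITS - 1 - n.leading_zeros())
}

/// Length of the run of non-sink states starting at i = 0, given a
/// per-state sink flag. The count is 1 if the run is i = 0 to i = 0
/// inclusive (i.e. state 0 is a non-sink and state 1 is a sink).
fn leading_non_sink_run(is_sink: &[bool]) -> usize {
    is_sink.iter().take_while(|&&s| !s).count()
}

/// Accept the DFA only if the run fits under the power-of-2 bound;
/// otherwise reject, same as if ExtendNFA returned false.
fn accept(n: usize, is_sink: &[bool]) -> bool {
    leading_non_sink_run(is_sink) <= pow2_bound(n)
}

fn main() {
    assert_eq!(pow2_bound(9), 8);
    // Run from i = 0 to i = 0 inclusive: the count is 1.
    assert_eq!(leading_non_sink_run(&[false, true, false]), 1);
    assert!(accept(2, &[false, true]));
}
```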
This same debate arose in HS acceptance discussions. To summarize: this thread and sub-forum are the place to debate that. I think everyone’s trying to set fair, non-dogmatic rules, just with varying ideas about what balance to strike. I’ll say this, though: when it’s practical, it’s very clean for the BB challenge’s ultimate work product to say: “we followed the method described here, deciding 1,234,567 of the remaining machines, and so can you, and nothing more needs to be said about those machines.”
I can’t find the quote right now, but I think even cosmo was open to shifting to a proof-data approach, albeit only for a small machine population.
Drat. I wasn’t really expecting it to work, but you’d already beaten my expectations, so I was hoping.