Request for all miner devs; use Fidelity

I have been heavily promoting the use of graphs per second as the preferred measure of performance, but that comes with the assumption that miners have negligible solution loss.

That is, the probability of a false negative, i.e. not finding a 42-cycle that exists in the graph, should be well under 1%.

Under that assumption, the graph rate is a good proxy for the solution rate, which is ultimately what really matters. The latter is not only harder to measure reliably but also less convenient in use, as solution rates are often less than 0.1 per second.
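To make the proxy concrete: a random graph contains a 42-cycle with probability about 1/42, so a lossless solver's solution rate is simply its graph rate divided by 42. A minimal sketch (the 1.713 gps figure is taken from the grin-miner log line quoted later in this thread):

```python
def solution_rate(graph_rate, fidelity=1.0):
    # A random graph contains a 42-cycle with probability ~1/42, so
    # solutions per second = graphs per second * fidelity / 42.
    return graph_rate * fidelity / 42

gps = 1.713  # graphs per second, as reported by a typical grin-miner log line
print(solution_rate(gps))         # roughly 0.04 solutions per second
print(solution_rate(gps) * 3600)  # roughly 147 solutions per hour
```

This is why solution rates are awkward to quote directly: at typical graph rates they sit well below 0.1 per second.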

Now that many other closed source miners are popping up, and claiming significant improvements in graph rate, it’s becoming more important to know how well the assumption holds. So I propose that the miner software provides a fidelity measure, defined as

fidelity = (42-cycles_found / graphs_searched) * 42

For grin-miner, fidelity will be approximately 1.
The number of decimals shown should probably not exceed 3.
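As a sketch, the definition translates directly into code (the counter names here are mine):

```python
def fidelity(cycles42_found, graphs_searched):
    # A lossless solver finds on average 1/42 of a 42-cycle per graph,
    # so the factor 42 normalizes its fidelity to approximately 1.
    return 42 * cycles42_found / graphs_searched

# E.g. 29 42-cycles found over 1000 graphs searched:
print(round(fidelity(29, 1000), 3))  # 1.218
```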

grin-miner already shows Solutions found, but not Graphs searched.
I hope to see fidelity implemented within a day or two,
and look forward to other miners copying this feature.

There is actually a tool in my cuckoo repo that can report fidelities from the solver output. In a copy of my repo, or under grin-miner/cuckoo-miner/src/cuckoo_sys/plugins/cuckoo, you can try for instance

cd src/cuckaroo
make mean29x4
./mean29x4 -r 1000 -t 2 | perl …/perl/
2 497 0.994
4 246 0.984
6 178 1.068
8 101 0.808
10 91 0.91
12 68 0.816
14 62 0.868
16 49 0.784
18 55 0.99
20 60 1.2
22 52 1.144
24 54 1.296
26 32 0.832
28 43 1.204
30 25 0.75
32 33 1.056
34 26 0.884
36 32 1.152
38 27 1.026
40 28 1.12
42 29 1.218

and see fidelities at different cycle lengths in the last column.
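The last column can be recomputed from the first two: the run searched 1000 graphs (-r 1000), and the expected number of L-cycles per graph is about 1/L, so the fidelity at length L is count * L / 1000. A quick Python check against a few rows of the sample output (this is not the perl script itself, just its arithmetic):

```python
GRAPHS = 1000  # the sample run above used -r 1000

# (cycle_length, count) pairs taken from the sample output
rows = [(2, 497), (4, 246), (6, 178), (42, 29)]

for length, count in rows:
    # Expected L-cycles per graph ~ 1/L, so fidelity = count * length / graphs.
    print(length, count, count * length / GRAPHS)
```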

Feedback / suggestions for improvement are welcome…


The number of decimals shown should probably not exceed 3.


What if something isn’t even close to 1? If there’s a 99%-error-rate solver with a 1001x speed-up, that level of detail would matter.

My grin-miner.log has lines like “… Graphs per second: 1.713 - Total Attempts: 97394”

Isn’t Total Attempts the number of graphs searched?

Yes, but since the recent filtering update, what it reports as “Solutions found” is actually not solutions found but cyclehash shares found. The meaning of that depends on what cyclehash fraction the pool specified, which can change from job to job.
I filed an issue about this and hope to see actual Solutions found restored.
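To illustrate why the distinction matters (this correction is my illustration, not grin-miner's actual code): if the pool specifies a cyclehash fraction f, each true solution becomes a share with probability roughly f, so a fidelity computed from raw share counts understates the true value by that same factor.

```python
def estimated_solutions(shares_found, cyclehash_fraction):
    # Each 42-cycle solution passes the share filter with probability
    # ~cyclehash_fraction, so shares ~= solutions * fraction on average.
    return shares_found / cyclehash_fraction

def fidelity_from_shares(shares_found, graphs_searched, cyclehash_fraction):
    # Undo the filtering before applying the fidelity definition.
    return 42 * estimated_solutions(shares_found, cyclehash_fraction) / graphs_searched

# E.g. 12 shares over 1000 graphs at fraction 0.5 suggests ~24 solutions:
print(fidelity_from_shares(12, 1000, 0.5))  # 1.008
```

Note that this only recovers an expectation; if the fraction changes from job to job, the miner would have to track it per job.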

I have added a global average fidelity to the miner; this assumes that all solutions are actually valid. The number itself can fluctuate a lot, so there is a 1000-solution warm-up period. With bad luck, it can still read 0.98 after 10k solutions and only recover slowly.
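A sketch of what such a tracker might look like (the 1000-solution warm-up threshold comes from the post above; the class and method names are hypothetical):

```python
class FidelityTracker:
    WARMUP_SOLUTIONS = 1000  # don't report until the estimate has settled

    def __init__(self):
        self.graphs = 0
        self.solutions = 0

    def record(self, graphs_searched, solutions_found):
        self.graphs += graphs_searched
        self.solutions += solutions_found

    def fidelity(self):
        # Global average fidelity, assuming all recorded solutions are valid.
        # Returns None during warm-up, since small samples fluctuate a lot.
        if self.solutions < self.WARMUP_SOLUTIONS or self.graphs == 0:
            return None
        return 42 * self.solutions / self.graphs

tracker = FidelityTracker()
tracker.record(42000, 1000)
print(tracker.fidelity())  # 1.0 for this exactly-average sample
```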



If I understand this correctly (and I probably don’t!), fidelity should ideally be as high as possible, no? While this seems impossible, since the chance of finding an n-cycle is supposed to be 1/n, in the long run we’d ideally like our fidelity to be as close to 1 as possible; and if our fidelity is somehow >>>> 1, then either we have a bug or we’ve broken AT31+.

Is this a correct understanding?

In the above definition, one expects the fidelity to converge when searching more and more graphs; in the extreme case, one would search all 2^256 possible graphs, which is of course only a theoretical possibility.
So in practice, fidelity must be empirically approximated, and one should only show as much accuracy as the sample size warrants.

I would expect that searching 10000 graphs would give about 2 digits of accuracy.

That’s assuming all solutions are valid. Better (but more time consuming) would be to have a list of known good solutions in a large range of nonces.
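The sample-size claim can be sanity-checked with Poisson statistics (my back-of-the-envelope, not from the post): the number of 42-cycles found in G graphs is roughly Poisson with mean G/42, so the standard error of the fidelity estimate is about sqrt(42/G).

```python
import math

def fidelity_std_error(graphs):
    # count ~ Poisson(graphs/42), so std(count) = sqrt(graphs/42);
    # fidelity = 42*count/graphs, hence std(fidelity) = sqrt(42/graphs).
    return math.sqrt(42 / graphs)

print(fidelity_std_error(10000))    # about 0.065
print(fidelity_std_error(1000000))  # about 0.0065
```

So at 10000 graphs the estimate is only good to a few percent, consistent with showing just a couple of meaningful digits.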

Just wondering what you think about the idea of using the smaller cycle lengths to estimate the fidelity during the miner’s start-up phase.
I mean, if an implementation suffers from low fidelity (due to running out of memory), this will also affect the other cycle lengths, but measuring the number of solutions found at shorter lengths over the number of graphs run may converge much more quickly, and so could be an option when the software has just started.

Any opinion?

Yes, this would be an excellent option to support in miners.
There is a slight tradeoff there, since shorter cycles tend to have higher fidelities: while we want to take some shorter lengths into account to reduce variance, we don’t want to include much shorter lengths that would shift the expectation too much. I propose to do as in

which obtains a fidelity over the 10 cycle lengths from 24 up to 42.
For clarity let’s call this fidelity_24_42.
If miners could provide this metric that would be very helpful indeed.
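Under the 1/L heuristic for expected L-cycle counts, fidelity_24_42 normalizes the total count over the 10 even lengths 24 through 42 by the sum of the per-graph expectations. A sketch, checked against the 1000-graph sample run in the first post:

```python
LENGTHS = range(24, 43, 2)  # the 10 even cycle lengths 24, 26, ..., 42

def fidelity_24_42(counts, graphs):
    # counts maps cycle length -> number found; a random graph contains
    # an L-cycle with probability ~1/L, so a lossless solver scores ~1
    # here too, with lower variance than using length 42 alone.
    expected_per_graph = sum(1 / L for L in LENGTHS)
    found = sum(counts.get(L, 0) for L in LENGTHS)
    return found / (graphs * expected_per_graph)

# Counts for lengths 24..42 taken from the sample run in the first post:
sample = {24: 54, 26: 32, 28: 43, 30: 25, 32: 33,
          34: 26, 36: 32, 38: 27, 40: 28, 42: 29}
print(round(fidelity_24_42(sample, 1000), 3))  # about 1.05
```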