How to develop a C32 miner?

Does anyone have an interest in developing a C32 miner?

thanks

Are you talking about ASICs?

Assuming you’re asking about GPU mining: an efficient (mean) C32 miner would need at least 22 GB of memory (twice what C31 needs). Very few cards have that much. And even if you had one, there is currently no incentive to mine C32, because mining two C31 graphs in parallel on such a card is likely more efficient than mining a single C32 graph.
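As a rough sketch of the scaling (the ~11 GB C31 footprint is an assumption, and linear scaling with edge count is a simplification, not a measurement):

```c
#include <stdio.h>
#include <stdint.h>

// Back-of-envelope sketch: mean-miner memory scales roughly linearly with
// NEDGES = 2^EDGEBITS. The ~11 GB figure for C31 is an assumption taken
// from this thread, not a measured number.
int main(void) {
  const double c31_gb = 11.0;                    // assumed C31 footprint
  for (int eb = 31; eb <= 33; eb++) {
    uint64_t nedges = (uint64_t)1 << eb;
    double gb = c31_gb * (double)(nedges >> 31); // scale from the C31 baseline
    printf("C%d: %llu edges, ~%.0f GB\n", eb, (unsigned long long)nedges, gb);
  }
  return 0;
}
```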

This may change once C31 rewards start to decrease (I believe in Jan 2020), but the Grin community expects ASIC miners for C32 to be available by then. So why would you develop such a GPU miner?

EDIT: I misunderstood.

According to @tromp, mining C32 should be up to 4x more efficient than mining C31. Therefore, if you had two cards mining C31 your output would be 2x C31, but if you had both of them (somehow) mining C32, your output should be equivalent to 8x C31.

Huh, where did you get that from?
Mining C32 is at best only slightly more efficient than mining C31 (because weight(32) = 2 * 32/31 * weight(31)).
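For concreteness, here's a quick check of that ratio, assuming Grin's graph_weight is (2 << (edge_bits - 24)) * edge_bits (i.e. BASE_EDGE_BITS = 24); a C32 graph has twice the edges of a C31 graph, so per edge processed you only earn 32/31, about 3% more:

```c
#include <stdio.h>
#include <stdint.h>

// Assumed form of Grin's graph_weight, with BASE_EDGE_BITS = 24.
uint64_t graph_weight(uint32_t edge_bits) {
  return ((uint64_t)2 << (edge_bits - 24)) * edge_bits;
}

int main(void) {
  uint64_t w31 = graph_weight(31), w32 = graph_weight(32);
  printf("weight(31) = %llu\n", (unsigned long long)w31); // 7936
  printf("weight(32) = %llu\n", (unsigned long long)w32); // 16384
  printf("ratio = %.4f (2*32/31 = %.4f)\n",
         (double)w32 / w31, 2.0 * 32 / 31);               // ~2.0645
  return 0;
}
```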

I thought I saw you type that in the Gitter chatroom at one point, but clearly I misunderstood. Thanks for the swift response and correction.

In the scenario posted above, does this mean that mining 2x C31 on the same card would be more profitable than mining C32 alone given the RAM requirements for C32? I frankly didn’t even realize one could use a 22-24GB card to mine C31 twice-over due to VRAM limitations.

We cannot be sure until we try, but I expect running two instances to be worse, as it causes more memory contention than a single (bigger or same-size) instance.

The reason I think running 2x C31 may be better (in the first year) is that C32 seems technically challenging. I have neither the hardware nor the software to try it, though, so this is all just speculation. There are two issues I see:

  1. We’re approaching 32-bit number limits. The current implementation of cuck(at)oo seems to require 64-bit storage for C32: https://github.com/tromp/cuckoo/blob/6dba86d2bed50757c7d7eb012ff0b1191378620b/src/cuckatoo/cuckatoo.h#L34
    It should be possible to use 32-bit numbers for C32, but if not, the memory requirement doubles (see the sketch at the end of this post). Assuming this can be fixed for C32, C33 will be an even bigger challenge :slight_smile:

  2. The way the trimming algorithm works now, it does two rounds of bucket sort and one round of “lean” trimming. When you double the number of edges, you have four bad options:

  • do three rounds of bucket sort
  • use more cache in the bucket sort phase
  • use more cache in the “lean” phase
  • come up with a different approach to do all of this

So for C32 we’ll likely see the runtime more than double on GPU. How bad it will be in practice, I don’t know.
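To make issue 1 concrete, here's a toy sketch (not the actual cuckatoo.h code): individual edge indices 0 .. 2^32-1 still fit in 32 bits, but the edge *count* NEDGES = 2^32 itself overflows a 32-bit word, so count and mask macros force the word type up to 64 bits:

```c
#include <stdint.h>
#include <stdio.h>

// Toy illustration of the 32-bit limit, not the real cuckatoo.h. With
// EDGEBITS = 32, an edge index 0 .. 2^32-1 still fits in a uint32_t, but
// NEDGES = 2^32 does not: if word_t were 32 bits, ((word_t)1 << 32) would
// be undefined, so the word type falls back to 64 bits and index storage
// doubles unless the code is reworked to avoid materializing the count.

#define EDGEBITS 32

#if EDGEBITS >= 32
typedef uint64_t word_t;   // safe, but twice the storage per index
#else
typedef uint32_t word_t;
#endif

#define NEDGES   ((word_t)1 << EDGEBITS)
#define EDGEMASK (NEDGES - 1)

int main(void) {
  printf("NEDGES   = %llu\n", (unsigned long long)NEDGES);
  printf("EDGEMASK = 0x%llx\n", (unsigned long long)EDGEMASK);
  uint32_t last = (uint32_t)(NEDGES - 1);  // 0xffffffff still fits in 32 bits
  printf("last edge index = 0x%x\n", last);
  return 0;
}
```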

Yes, it will be fixed for C32, but C33 will be such a headache that I probably won’t bother. The ASICs will have totally taken over by then, so there will be no interest in a C33 GPU miner anyway.

Only the seeding does two rounds of 6 bits; later rounds directly do 12 bits. This won’t change for C32.
The lean within-bucket mining will need to increment PART_BITS, so RTX cards will have it at 1 and GTX cards at 2.
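For anyone curious what PART_BITS partitioning looks like, here's a rough CPU sketch (not grin-miner's actual code; sip_node is a toy stand-in for the real siphash mapping, and real miners pack 2-bit counters rather than bytes): the node space is split into 2^PART_BITS partitions, and each trimming round scans the edge list once per partition, so the counter array shrinks but the number of scans grows.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Hedged CPU sketch of PART_BITS in lean trimming. Node counters for a
// round don't fit in fast memory at C32 sizes, so the node space is split
// into 2^PART_BITS partitions; each round scans the edges once per
// partition. Raising PART_BITS halves the counter array but doubles scans.

#define EDGEBITS  20                      // toy size; C32 would use 32
#define PART_BITS 2
#define NEDGES    ((uint64_t)1 << EDGEBITS)
#define NNODES    NEDGES                  // one endpoint side, simplified
#define PART_SIZE (NNODES >> PART_BITS)

// Toy stand-in for the siphash-based edge-to-node mapping (NOT siphash).
static uint32_t sip_node(uint64_t edge, int uorv) {
  uint64_t x = edge * 0x9E3779B97F4A7C15ull + (uint64_t)uorv;
  x ^= x >> 33; x *= 0xFF51AFD7ED558CCDull; x ^= x >> 33;
  return (uint32_t)(x & (NNODES - 1));
}

static int edge_alive(const uint8_t *alive, uint64_t e) {
  return alive[e >> 3] & (1u << (e & 7));
}

void trim_round(uint8_t *alive /* NEDGES bits, 1 = edge still alive */) {
  uint8_t *degree = malloc(PART_SIZE);    // real miners pack 2-bit counters
  for (uint32_t part = 0; part < (1u << PART_BITS); part++) {
    memset(degree, 0, PART_SIZE);
    // pass 1: count endpoint occurrences for nodes in this partition
    for (uint64_t e = 0; e < NEDGES; e++) {
      if (!edge_alive(alive, e)) continue;
      uint32_t node = sip_node(e, 0);
      if ((node >> (EDGEBITS - PART_BITS)) != part) continue;
      uint64_t i = node & (PART_SIZE - 1);
      if (degree[i] < 2) degree[i]++;     // saturate at 2
    }
    // pass 2: kill edges whose endpoint has degree 1 (can't be on a cycle)
    for (uint64_t e = 0; e < NEDGES; e++) {
      if (!edge_alive(alive, e)) continue;
      uint32_t node = sip_node(e, 0);
      if ((node >> (EDGEBITS - PART_BITS)) != part) continue;
      if (degree[node & (PART_SIZE - 1)] < 2)
        alive[e >> 3] &= (uint8_t)~(1u << (e & 7));
    }
  }
  free(degree);
}

int main(void) {
  uint8_t *alive = malloc(NEDGES / 8);
  memset(alive, 0xFF, NEDGES / 8);  // all edges start alive
  trim_round(alive);                // real miners run many rounds,
                                    // alternating the u/v endpoint side
  free(alive);
  return 0;
}
```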


Wait, what? 2x C31 is possible? With grin-miner?? How?! I have an RTX Titan and would like to try it out. I also asked about it here: Two instances of grin-miner for one GPU?

I have vintage Gridseed miners I would like to turn back on instead of letting them just sit around, so yes, please, sounds like fun.