Best GPU for mining Grin

What’s the reason why a Tesla V100 with 32GB of memory is even slower than a 2080 Ti? Thanks!

Extra memory doesn’t help much on C31; the newer-generation Turing architecture helps more. Anyway, that was my guess; let’s wait and see the performance numbers.

A fully optimized Cuckoo Cycle miner is compute-bound on GPUs, not memory-bound.

That is not what it looks like on cuckaroo_cuda_29: a GPU typically spends 37 ms computing and bucketing siphashes, and then another 188 ms on edge trimming and cycle chasing. So even counting the whole initial compute-and-bucket phase as hash time, siphash computation is at most about 16% (37/225 ms) of the runtime.
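
For reference, here is roughly the per-edge work being timed: a minimal sketch of siphash-2-4 as used to generate Cuckoo Cycle edge endpoints (key derivation from the block header is omitted, and this is an illustration rather than the actual kernel code):

```c
#include <stdint.h>

#define ROTL(x, b) ((uint64_t)(((x) << (b)) | ((x) >> (64 - (b)))))

#define SIPROUND do {                                           \
    v0 += v1; v1 = ROTL(v1, 13); v1 ^= v0; v0 = ROTL(v0, 32);   \
    v2 += v3; v3 = ROTL(v3, 16); v3 ^= v2;                      \
    v0 += v3; v3 = ROTL(v3, 21); v3 ^= v0;                      \
    v2 += v1; v1 = ROTL(v1, 17); v1 ^= v2; v2 = ROTL(v2, 32);   \
  } while (0)

/* siphash-2-4 of one nonce; k0..k3 come from a hash of the block
   header (derivation omitted). Edge e's two endpoints are the hashes
   of nonces 2*e and 2*e+1, reduced to the node range. */
uint64_t siphash24(uint64_t k0, uint64_t k1, uint64_t k2, uint64_t k3,
                   uint64_t nonce) {
  uint64_t v0 = k0, v1 = k1, v2 = k2, v3 = k3 ^ nonce;
  SIPROUND; SIPROUND;                       /* 2 compression rounds */
  v0 ^= nonce;
  v2 ^= 0xff;
  SIPROUND; SIPROUND; SIPROUND; SIPROUND;   /* 4 finalization rounds */
  return (v0 ^ v1) ^ (v2 ^ v3);
}
```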

Are you suggesting that you optimized the 188 ms down to under 37 ms, for a 3x or more total speedup?

You mentioned in another thread, in response to a question about the complexity of the Cuckoo Cycle PoW compared to standard cryptographic hashes, that graph theory is a well-studied field of mathematics. Indeed. But I can’t find (probably because I don’t know exactly where to look) any strong mathematical argument that the current cuckatoo/cuckaroo implementation cannot be dramatically improved upon.

After all, the edge trimming phase where most time is now spent wasn’t part of your initial proposal.

What I am wondering is: what are the strongest arguments we have that edge trimming in its current implementation is close to optimal?


The observation that the lean miner only uses 2 bits of memory, 2 bit-writes, and 2 bit-reads to verify that two connecting edges must be kept around. I cannot imagine using any fewer resources.
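
To make that concrete, here is a minimal sketch of one lean trimming round as I understand it (illustrative names, not the actual miner code; the per-node counter is shown as a byte for readability, where the real trick packs it into 2 bits):

```c
#include <stdint.h>

/* sipnode(e, uorv) = siphash24(2*e + uorv) reduced to the node range;
   defined elsewhere (see the siphash sketch above). */
uint64_t sipnode(uint64_t edge, int uorv);

/* One lean trimming round on one side of the graph.
   alive: 1 bit per edge; cnt: per-node degree counter, assumed
   zeroed before each round, saturating at 2. */
void lean_trim_round(uint64_t nedges, uint64_t *alive,
                     uint8_t *cnt, int uorv) {
  /* Pass 1: for every live edge, bump its endpoint's counter
     (the "2 bit-writes" in the 2-bit packed version). */
  for (uint64_t e = 0; e < nedges; e++)
    if (alive[e >> 6] & (1ULL << (e & 63))) {
      uint64_t u = sipnode(e, uorv);
      if (cnt[u] < 2) cnt[u]++;
    }
  /* Pass 2: re-derive the endpoint and kill the edge if no other
     edge shares it (the "2 bit-reads"). */
  for (uint64_t e = 0; e < nedges; e++)
    if (alive[e >> 6] & (1ULL << (e & 63))) {
      uint64_t u = sipnode(e, uorv);
      if (cnt[u] < 2)
        alive[e >> 6] &= ~(1ULL << (e & 63));
    }
}
```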


Thank you!

And what about the mean miner? Isn’t that the one used on GPU?

The mean miner is just our best attempt at dealing with the lack of SRAM on GPUs. It bucket-sorts the edges so that we can lean-mine within each bucket using the limited SRAM in each threadblock.
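
Roughly, the bucketing pass looks like this (a simplified sketch with made-up names and bucket sizes; `sipnode` is the endpoint function from the lean sketch above):

```c
#include <stdint.h>

#define NODEBITS   29                 /* e.g. cuckaroo29 */
#define BUCKETBITS 12                 /* illustrative: chosen so one   */
#define NBUCKETS   (1 << BUCKETBITS)  /* bucket's node range fits SRAM */

uint64_t sipnode(uint64_t edge, int uorv);

/* An edge record: which edge it is, and its endpoint on one side. */
typedef struct { uint32_t edge; uint32_t node; } entry;

/* Scatter edges into buckets by the top bits of one endpoint, so each
   bucket spans a node range small enough for a threadblock's SRAM. */
void bucket_edges(uint64_t nedges, entry *buckets[NBUCKETS],
                  uint64_t fill[NBUCKETS]) {
  for (uint64_t e = 0; e < nedges; e++) {
    uint64_t u = sipnode(e, 0);
    uint32_t b = (uint32_t)(u >> (NODEBITS - BUCKETBITS));
    buckets[b][fill[b]++] = (entry){ (uint32_t)e, (uint32_t)u };
  }
  /* Each bucket is then trimmed independently: build the degree
     counters for its node range in shared memory and drop edges with
     a degree-1 endpoint, exactly as in the lean round. */
}
```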


My miner hasn’t done a single hash yet.
If you look at the theoretical capabilities of a 1080 compared to the requirements of siphash, it has 1-2 orders of magnitude more memory bandwidth than needed to keep up with its siphash computation (mean miner). See my next post in the other thread about increasing graph size…
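
To spell that comparison out as arithmetic (the figures in the example call are placeholders for illustration, not measurements or claims about any specific card):

```c
#include <stdio.h>

/* The back-of-envelope above as a formula: how many times more DRAM
   bandwidth a card has than its siphash output stream needs.
   All inputs are for the reader to plug in; nothing here is measured. */
double bandwidth_headroom(double dram_bytes_per_s,
                          double siphashes_per_s,
                          double bytes_written_per_hash) {
  return dram_bytes_per_s / (siphashes_per_s * bytes_written_per_hash);
}

int main(void) {
  /* Placeholder figures only: 320 GB/s of DRAM bandwidth,
     10 Ghash/s of siphash-2-4, 4 bytes per bucketed edge record. */
  printf("headroom: %.0fx\n", bandwidth_headroom(320e9, 10e9, 4.0));
  return 0;
}
```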


AMD just released the 7nm Radeon VII with 16GB of VRAM; it launches this February for $699.


I want to buy a GTX 1080, but does it matter what brand and version? I would buy the Gigabyte GTX 1080 G1 Gaming. This should give me 3.7 GPS?
Please advise 🙂

Getting about 4 GPS with this 2070:
https://www.canadacomputers.com/product_info.php?cPath=569_43_1200_557_559&item_id=129150

likely more energy efficient too!

Maybe, but more expensive too. I just wanted to know if the brand and version make a big difference, or if the performance of any GTX 1080 is more or less the same.

I think the pricing is pretty similar, around here at least. There will be differences in speed from manufacturer to manufacturer, and there may be low-end, mid-level, and high-end 1080s offered by the same manufacturer. Best to read the specs and compare.

Here is the only chart I know of at the moment.

Can someone confirm whether a GTX 1060 6GB is able to mine Grin? I thought that mining Grin needed 8GB minimum.

See GPU mean memory reductions

Yes, the G1 Gaming is a great card. The only problem you can have is a fan failure, but fans are easy to replace. You will get 3.5 GPS; not sure about 3.7.

Just don’t buy cards with:

  • blower-style fans (like the Founders Edition); they are noisy as hell
  • short versions with 1-2 fans instead of 3; they are fine for gaming, but for mining, thermals are more important

The Gigabyte G1 Gaming is a great card. I have many; they work great.

Thanks for that 🙂 but I am not able to locate the CUDA mean plugin for the 5.5 cards…
In my folder grin-miner/target/release/plugins I only find CPU plugins… Where
can I locate the CUDA plugins?

I did a short test today with my GTX 1080 Ti (stock clocks) and got 4.5 GPS @ 250 W.