It took a while to finally code it, but now it's there: the first slean-based miner for Cuckatoo-31, supporting it on 4G and 8G cards. You want to learn how the trimmer works? Then watch my presentation at the Grin Amsterdam meetup, starting at approx. 1:14:20: Grin Amsterdam 2019 - YouTube
You can download it here:
lolMiner 0.8.1 for Windows and Linux
Note: Windows will be updated later today
Sample usage (Linux):
./lolMiner --coin GRIN-AT31 --pool eu.stratum.grin-pool.org --port 3416 --user Lolliedieb --pass rigName
Sample usage (Windows):
lolMiner.exe --coin GRIN-AT31 --pool grin-eu.sparkpool.com --port 6667 --user email@example.com/rigName
- Cycle finding is completely done on GPU - the CPU is barely used (only on Nvidia one CPU core is used, due to the OpenCL back end)
- Fidelity should converge to 1 on both solvers. The miner displays fidelity every 5 minutes (configurable) and in its API. Fidelity is calculated for 14- and 42-cycles from startup of the miner.
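For reference, fidelity is the ratio of cycles found to cycles statistically expected: a random Cuckatoo graph contains on average 1/L cycles of length L. A minimal sketch of that calculation (function name and numbers are illustrative, not the miner's actual code):

```python
def fidelity(cycles_found: int, cycle_len: int, graphs_searched: int) -> float:
    """Ratio of cycles found to the statistically expected count.

    A random graph contains on average 1/L cycles of length L,
    so over N graphs we expect N / L cycles.
    """
    expected = graphs_searched / cycle_len
    return cycles_found / expected

# e.g. 240 42-cycles found over 10000 graphs:
print(fidelity(240, 42, 10000))  # ≈ 1.008
```

A value drifting well below 1 would indicate the trimmer is losing valid cycles.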
- Fee is 1% as for all other coins in the miner
- The miner's stats display, API and so on are configurable. See the manual
Estimated Performance (Stock clocks)
| Card | 8 G Code | 4 G Code |
| --- | --- | --- |
| GTX 1060 6G | - | 0.75 g/s |
Minus sign = Will not be used or is not compatible
… = Compatible but value not yet known
Have fun mining - and I appreciate reports of stability, hash rates, problems … whatever you want to tell me
I just updated the Linux build to 0.8.1
It mainly did some stratum fixes:
- Support for NiceHash
- Solved a problem with grinmint pool
- reduced number of stale shares submitted
Edit: Windows version now also up to date
I am testing this! So far:
NVIDIA 1060(6GB) - 0.20-0.30 G/s
NVIDIA 1050ti(4GB) - 0.13-0.17 G/s
NVIDIA P6000(24GB) - 0.83-0.90 G/s
AMD RADEON VII(16GB) - 1.03-1.10 G/s
AMD VEGA FE(16GB) - 0.80-0.83 G/s
AMD RX470(8GB) - 0.38-0.40 G/s
will edit it soon to add more cards…
Any idea of adding support for 16+GB cards??
As Tromp said, it doesn't scale well to have two instances accessing the memory at the same time - but have you tried having one instance access double the memory? (Since the bottleneck is memory, it might help.)
Congratulations on this monumental implementation effort!
Were you able to measure how much time is saved by doing the cycle finding on the GPU rather than on a speedy CPU?
@josevora Yes, I will likely do a 16G variant of the algorithm for Radeon VII and the 16G RX 570.
As you pointed out multiple instances will rather slow it down, so it will use more memory per instance. My estimate - from where I had to make time-memory trade offs - is that one can gain ~30% hash rate by using 16 GByte of GPU memory instead of 8G.
@tromp Thank you very much! Well, I did not measure the CPU run time directly. I think for a single GPU the actual time on CPU can be counted as 0, because one would usually do the cycle finding in an extra thread and interleave its execution with the GPU already doing new work. That said, with many GPUs in a bigger mining rig a weak 2-core CPU may reach its limits, so for that case the computation on GPU is beneficial. The penalty for doing so is approx. 3% on the 8G code and 2% on the 4G code. That was worth it in my opinion
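The interleaving idea described above can be sketched as follows - a toy pipeline where stub functions stand in for the real trimming kernels and cycle finder (all names here are hypothetical, not lolMiner internals):

```python
from concurrent.futures import ThreadPoolExecutor

def trim_on_gpu(graph_id):
    # Stand-in for the GPU trimming kernels: returns surviving edges.
    return [(graph_id, i) for i in range(4)]

def find_cycles_on_cpu(edges):
    # Stand-in for CPU cycle finding on the trimmed edge set.
    return len(edges)

results = []
with ThreadPoolExecutor(max_workers=1) as cpu:
    pending = None
    for graph_id in range(8):
        edges = trim_on_gpu(graph_id)        # GPU busy trimming graph N
        if pending is not None:
            results.append(pending.result())  # collect cycles of graph N-1
        # Hand graph N to the CPU thread; the next loop iteration trims
        # graph N+1 on the GPU while the CPU searches graph N for cycles.
        pending = cpu.submit(find_cycles_on_cpu, edges)
    results.append(pending.result())
print(results)
```

With a single GPU the CPU work hides entirely behind the next trimming round; it only becomes a bottleneck when many GPUs feed one weak CPU, which is the case the on-GPU cycle finder addresses.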
GPU 1: Share accepted (425306 ms)
GPU 1: Share accepted (343796 ms)
GPU 0: Share accepted (328373 ms)
GPU 0: Share accepted (39527 ms)
notice anything weird there?
Yes, that's weird xD
Can you tell me on what pool you see this? It may be that there is a simple explanation for this.
Background: The miner sends the share with "id" = 4+gpuIndex to the pool and expects that this id is also used in the return message, to look up when the share was sent and to assign the accepted or rejected share to the right GPU in my stats module.
Unfortunately that only works for pools acting so. Yesterday when I did NiceHash support I got back stuff like "GPU 4294967292: share accepted (nan ms)" because the NiceHash pool always sent me "0" as the id for accepted shares ^^
So this may just be my grin stratum back end speaking a different dialect than the pool you're connected to. But do not worry: the most important thing is the "accepted", and as long as the pool roughly agrees with the miner on hash rate you're perfectly fine
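The bookkeeping described above can be sketched like this (a minimal illustration, assuming the id scheme from the post; the function names are made up, not lolMiner's actual code):

```python
import time

pending = {}  # stratum "id" -> (gpu_index, submit_timestamp)

def submit_share(gpu_index: int) -> int:
    """Tag the share with id = 4 + gpuIndex and remember when it was sent."""
    share_id = 4 + gpu_index
    pending[share_id] = (gpu_index, time.monotonic())
    return share_id

def on_pool_reply(share_id: int) -> str:
    """Look up the share by the id the pool echoes back."""
    if share_id not in pending:
        # A pool that echoes a different id (e.g. always 0) lands here -
        # the source of stats like "GPU 4294967292 ... (nan ms)".
        return "unknown share id %d" % share_id
    gpu, sent = pending.pop(share_id)
    latency_ms = (time.monotonic() - sent) * 1000
    return "GPU %d: Share accepted (%.0f ms)" % (gpu, latency_ms)
```

If the pool does not echo the submitted id, the lookup fails and both the GPU index and the latency in the log become meaningless, which matches the odd timings shown above.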
OK, I’m using a variant of Grin-Pool.org… it’s been 6h and I have 2% rejects, but the hashrate on the pool is 39% higher than in the miner! A case to say LOL?
14 Cycle Fidelity is 0.99
42 Cycle Fidelity is 1.01
so everything ok…
Well - you have multiple layers of luck between running a graph and what’s recorded at the pool - seems many of the solutions were below the pool’s diff barrier then
Fidelity close to 1 seems healthy
If you like you can PM me the data of your pool (address), then I can run a small test on it to see what it sends and why the miner misinterprets it.
Do you know if it is compatible with the RX550 4GB? Interested to know about it.
Yes, on Linux and Win 7 that will likely work (Windows 10 unfortunately not), but I have no idea about the performance. Most likely ~0.1 g/s on the 4G kernel.
Using 1070 cards with 8GB - which will be more profitable: mining slean Cuckatoo-31 or Cuckaroo-29?
Unfortunately I am no longer able to update the first post here.
But I want you to know that lolMiner 0.8.5 was just released, including Cuckarood-29 support (–coin GRIN-AD29) for the PoW fork tomorrow.
You can find it here:
Supported Cards are AMDs with 4 and 8Gbyte.
Performance is not widely tested yet, but I got 2.35 g/s on a 580 4G and 2.8 g/s on a 580 8G. So most Polaris cards will range from 2.2 to 2.8, I guess.
For Vega I observed that the miner is a little quicker under ROCm than on the normal drivers - currently investigating why.
Happy mining tomorrow after the fork!