Choice of ASIC Resistant PoW for GPU miners

The logic in this dual PoW scheme strikes me as: since the mining ASIC market is highly centralized, we want to have a healthy GPU mining community at launch, while allowing for ASIC manufacturers to diversify.

Is that correct?

If that is so, I don’t see how the market for GPUs capable of mining cuckaroo is any more healthy or diversified than the mining ASIC market right now. There are exactly 2 companies building GPUs with enough RAM to run cuckaroo, and exactly one whose cards can run the currently existing stable mining software, which was written in the vendor-locked CUDA API. A case can be made that there are many more Nvidia GPUs with 7GB+ RAM deployed than Cuckoo Cycle ASICs, but there are far more lower-RAM GPUs deployed that would benefit from the ability to lean mine.

We currently have miners for both NVIDIA and AMD using only 5.5 GB, with 3rd party 4 GB miners possibly materializing before long.

I’m focusing on the implied claim that xmr doesn’t have mining attacks, when in fact it has had an ongoing one for years; it’s one of the reasons I consider the project a failure and have been avoiding it.

I may be stretching the term a bit, but it’s still an undesirable behavior that is allowed by flaws in the protocol.

Again, this is not what I referred to. Grin will not have a botnet problem, since it’s not economically CPU-mineable.

Attacking with rented hash over the short term is orders of magnitude cheaper and easier than acquiring equipment, securing real estate, power, etc.

I fully agree with that statement. But what makes you think this won’t be possible with cuckaroo29? Mining will happen from day 1 with 4GB GPUs. Grin has already “lost” the >6GB memory requirement claim, and as officially stated by @tromp it will also be mined on 4GB GPUs.

ASICs FTW. Do whatever is necessary to keep the hash-rental market out. The process is fuzzy though, and not perfect.

I agree that with a static PoW algorithm, ASICs best secure the long-term outcome of a cryptocurrency. However, given the ASIC market’s immaturity, this is currently far from the truth.

So Grin approaches the goal perfectly:
Start out with GPU mining to distribute the supply as well as possible in the beginning, just like Bitcoin did, and then over time let ASICs take over and provide a stronger proof of work that commodity hardware will never have an attack vector against.

However, I agree with @lvella and disagree with @tromp with regards to only allowing “high end” GPUs to participate in the first two years.

As stated by Grin:

Over the past 6 months, it has become apparent to the team that:

  • The availability of an ASIC for Cuckoo Cycle at launch is a distinct possibility.
  • The current ASIC market is centralized, especially when it comes to recent cryptocurrency releases. The development of a competitive ASIC market takes time.
  • A healthy and grassroot GPU mining community at launch is highly desirable.

In the last point they say they want to have a healthy and decentralized GPU mining community. By favouring high-VRAM GPUs only, they are accommodating megafarms and handicapping average-Joe miners, smaller mining farms and people with less capable GPUs, without stating a good reason.
The “good reason” was “gamers will constitute the mining community” (which is false anyway), but now, even with that reason thrown out of the window, there really isn’t a good reason not to allow any GPU made in the last 4–5 years to participate (implying none was made with less than 1GB). If anything, they are severely hurting decentralization and fairness, aka an even playing field.

It’s very important to note that this is not 2010/2011 anymore. The infrastructure is now there (in every class segment) and ready. So accommodate the current situation as best as possible and do not prefer one segment over another. Allowing all segments to participate will create the fairest coin distribution and hashpower distribution for the first 2 years.

How’s this for “gamer mining”?
http://ctolab.ethosdistro.com/
https://web.archive.org/web/20190117094725/http://ctolab.ethosdistro.com/

1500 8x GP102 rigs, aka exclusive mining-only Nvidia GPUs that do not even have a display output (1080/1080 Ti equivalent).
Very likely belonging to an ex(?) partner of the Mineority group (I have pinpointed it to Sweden).
Currently 22% of the network.

Just open it up to everyone already. That 8GB+ requirement has long been lost. We will soon see lossless 4GB mining, and (lossy) 3GB mining shortly thereafter, developed in secret.

Just open it up instead of letting people keep their low-VRAM GPU miners secret, to the disadvantage of decentralization.

4GB, 3GB, 2GB, 1GB GPUs can all mine C31 using lean mining…
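Rough arithmetic supports this: lean mining keeps roughly one bit per edge resident, so the dominant buffer for cuckatoo31 is about 256 MiB. A small sketch (sizes are approximate, and a real lean miner also needs partitioned node counters on top of this):

```python
# Approximate lean-mining memory footprint for cuckatoo graphs.
# Lean mining keeps a one-bit-per-edge "alive" bitmap; node-degree
# counters can be processed in partitions, so the edge bitmap dominates.

def edge_bitmap_mib(edge_bits: int) -> float:
    """One bit per edge, converted to MiB."""
    edges = 1 << edge_bits
    return edges / 8 / (1 << 20)

for n in (29, 31, 32):
    print(f"cuckatoo{n}: ~{edge_bitmap_mib(n):.0f} MiB edge bitmap")
# cuckatoo29: ~64 MiB, cuckatoo31: ~256 MiB, cuckatoo32: ~512 MiB
```

Which is why even 1GB cards can in principle hold the lean-mining state for C31; the cost is paid in speed, not memory.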

So can an Intel Celeron, or the Snapdragon in your Android phone. Would you do it?
My point is: 4GB, 3GB, 2GB and 1GB miners are all coming and being implemented in secret. You just don’t know it yet, and the majority of people probably won’t. Get my point?
Even playing field.

There is nothing “even” about mining, it’s a zero sum game. If you enable 1GB cards, then the people who can invest more capital are still going to get the most out of it. Apart from that, you open up the network to nicehash type of attacks. Incentivizing the use of specialized hardware that is unique to the Grin network and cannot be easily rented in the cloud is the best course of action. IMO

I disagree. If you allow more cards to mine, it will distribute the hashrate better. Right now mining is effectively limited to those with 8GB+ cards.

Your point about nicehash / cloud GPU attacks is true, but that’s an argument for ASICs. At this stage I don’t think limiting the GPUs that can mine makes any difference in terms of a nicehash attack.


I fail to see why this is the case. Hobbyists with a couple of low-end GPUs laying around are not going to profit much, are they? It only opens up the door to more farms. If that’s the case, then what’s the point?

Isn’t having more farms better than having only a few farms?

The profitability of home miners vs large miners is a separate issue. There’s no beating economies of scale. Home miners who think they’re profitable are invariably ignoring their labor costs, or their power costs, or the depreciation of hardware, or some factor like that. They’ll think they’re profitable even though they’re losing money if they do the accounting accurately. Hobbyists should mine for the fun of it and the love of the coin, not to make money.

If you enable 1GB cards, then the people who can invest more capital are still going to get the most out of it.

So what exactly does excluding low-end GPU owners help? You’re just excluding them, and it’s unfair. It’s not like they will take away all your profits either, since MOST (95%+ of farms and people) run 4GB GPUs or more, as has been apparent with the massive GPU farms from Sweden connecting to Grin (hint: Genesis Mining, Alpine Mining and others).
The only thing it will do is allow them to participate and have their fun with Grin too, which helps decentralization and network growth a bit.

Please give me one objective and reasonably thought-out argument that’s going to convince me and I’ll be happy. That’s the only way to convince me, and so far I haven’t seen or been able to think of one.

Apart from that, you open up the network to nicehash type of attacks.

This is just plain wrong. As previously stated by @tromp and reiterated by hard- & software/mining expert @timolson, 4GB lossless GPU mining is possible, and that’s where the big rented hashpower comes from.

Lower-end GPUs mostly also have limited compute capabilities (for example, an RX 550 2GB GPU has 8 CUs; in contrast, an RX 570 4GB has 32 CUs), so there is no reason to fear those GPUs even if you’re worried about them taking away high-end GPU profits (if that’s your argument), because the big chunk is mined at 4GB and that’s already here.

I reiterate: Enable an even playing field. 4GB mining is already here and it’s not going anywhere. It shines a good light on Grin, helps decentralization and distributes Grin more fairly, which is the goal of the Grin developers, otherwise they wouldn’t have stated so in a front-page post.

It definitely is when most of them are rational/honest. But in a scene where you have various interests going in different directions, allowing any type of hardware to mine can work in both ways: it will invite more honest players but it also removes any barrier for dishonest players. People who are genuinely interested in securing the network are going to take the extra step of [insert requirement]. In that sense, you can understand that being conservative about it is actually a smart choice.

The most powerful GPUs come with 8GB+. You don’t get a 1080 Ti core with 1GB. Actually using that memory means that even if a smaller-footprint algorithm surfaces, it won’t dip far below 4GB without losing speed. Roo makes sure you can’t lean mine, 6-month hard forks are good, but having to design GPU-like memory buses is an extra level of safety against stealth ASICs that would ruin the day for everybody.

It definitely is when most of them are rational/honest. But in a scene where you have various interests going in different directions, allowing any type of hardware to mine can work in both ways: it will invite more honest players but it also removes any barrier for dishonest players.

Why do you assume there are no “dishonest” players with 4GB+ GPUs? I say dishonest miners are distributed evenly; by allowing everyone in, you’re just making it harder for the already-dishonest participants, or the potential “thinking-about-becoming-dishonest” participants.
i.e. you’re strengthening the network.

People who are genuinely interested in securing the network are going to take the extra step of [insert requirement].

Do you just assume that people have endless money to quickly upgrade to [insert requirement] to mine Grin? Hint: multi-dozen- to hundred-million-dollar farms in Iceland, Sweden and Canada do. And that’s ignoring the environmental impact of that thinking completely.

Let me put it another way:
I have perfect devices for mining Grin, if it just weren’t for the artificially inflated graph size, which has no justification for existing as it stands today.
You could also argue that it is a waste that these GPUs will not secure your Grin network even though you could enable them to; instead they will mine a competitor’s coin.


(This is the farm I’m running, if it helps you visualize better; you’re arguing for these GPUs not to secure the network, with an artificially upheld barrier.)

This is your usual (in my case slightly bigger than usual) 1 man operation.

Edit:

The most powerful GPUs come with 8GB+. You don’t get a 1080 Ti core with 1GB. Actually using that memory means that even if a smaller-footprint algorithm surfaces, it won’t dip far below 4GB without losing speed. Roo makes sure you can’t lean mine, 6-month hard forks are good, but having to design GPU-like memory buses is an extra level of safety against stealth ASICs that would ruin the day for everybody.

It has already been stated that a 1GB requirement would not increase the efficiency, nor cost-effectiveness of a potential ASIC.

Kind regards,
An honest miner

There are no Android miners AFAIK, and a Celeron would take more than half the block time per attempt. But if your limited-memory GPU can run Cuckatoo31 in less than half the block time, then you should seriously consider mining with it, as C31 is VERY profitable now with network difficulty under 320M…
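To make the comparison concrete, here is a toy pro-rata reward estimate. The 10,000 GPS network graph rate is a made-up illustrative number, treating reward share as proportional to graph rate is a simplification, and the 0.04 GPS figure matches what was reported elsewhere in the thread for a 2GB RX 550:

```python
# Toy expected-reward calculation. The network graph rate is a
# hypothetical illustration, not a measurement.

BLOCK_TIME_S = 60        # Grin targets one block per minute
BLOCKS_PER_DAY = 24 * 60 * 60 // BLOCK_TIME_S
REWARD_PER_BLOCK = 60    # grin

def expected_grin_per_day(device_gps: float, network_gps: float) -> float:
    """Expected reward, assuming reward splits pro rata by graph rate."""
    return BLOCKS_PER_DAY * REWARD_PER_BLOCK * device_gps / network_gps

# A Celeron taking >30 s per attempt manages < 1/30 graphs/s:
celeron = expected_grin_per_day(1 / 30, 10_000)
# A 2GB card at 0.04 GPS:
rx550 = expected_grin_per_day(0.04, 10_000)
print(f"Celeron: {celeron:.3f} grin/day, RX 550: {rx550:.3f} grin/day")
```

At these toy numbers the Celeron ceiling and the 2GB card land in the same ballpark; whether either covers its power bill is a separate question.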

Does ocl_cuckatoo C31 support 1 & 2 GB GPUs?

??? It is an ASIC-friendly Cuckoo algo.

That was not my point. You could theoretically also try to find graphs with pen & a lot of paper and a lot of dedication & imagination. Would you see yourself doing that economically?

My point is: there’s no technical argument against including lower-end GPUs in the Grin ecosystem. So there is only a governance/political argument, and I’m asking you: what is it?

I tried Cuckatoo31 with my 2GB RX 550 with the official Grin miner, and it seems to be working at 0.04 GPS, which isn’t that bad. That implementation is completely unoptimized, correct? I might try that until ASICs come along, but I don’t think your governance for low-end GPUs is fair at all.

Yes, with lean mining.

Do you have any ideas to reduce SRAM size for C31 other than using PART_BITS? It seems the minimum SRAM is about 300MB for C31 with PART_BITS. SRAM of this size needs a 10nm or 7nm process node, and integrating that much SRAM carries some risk.
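For what it’s worth, here is a back-of-envelope of where that ~300MB floor comes from, under the assumption that PART_BITS only partitions the node-counter space while the one-bit-per-edge bitmap must stay resident (the 2-bit counter width is also an assumption):

```python
# Back-of-envelope SRAM estimate for lean C31 with partitioned node
# counters. Assumptions: the 1-bit-per-edge bitmap stays resident;
# node counters (2 bits each, assumed) are processed in 2**part_bits
# passes per trimming round.

EDGE_BITS = 31
EDGES = 1 << EDGE_BITS
NODES_PER_SIDE = 1 << EDGE_BITS
COUNTER_BITS = 2  # hypothetical counter width

def sram_mib(part_bits: int) -> float:
    edge_bitmap = EDGES / 8                                      # bytes
    counters = NODES_PER_SIDE * COUNTER_BITS / 8 / (1 << part_bits)
    return (edge_bitmap + counters) / (1 << 20)

for pb in (0, 2, 4, 6):
    print(f"PART_BITS={pb}: ~{sram_mib(pb):.0f} MiB, "
          f"{1 << pb} passes per round")
```

With larger PART_BITS the total approaches the 256 MiB edge-bitmap asymptote, which is consistent with the ~300MB figure; the catch is that each extra bit doubles the number of passes per trimming round.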