Proof of work update

Great, I'd recommend starting sooner rather than later - you might be surprised at what you find. (If they aren't interested in a small, risky new coin, then why would they build a private ASIC operation?) No need for a reply, I think this point about focusing on ASICs is clear now. All the best.

Is there reasoning behind why you chose 2 years as the time span to achieve a competitive mining scene? Bitcoin has had 9 years already and still doesn't have a competitive mining scene.

Maybe it would be better to overestimate by a lot than to get it wrong.

2 Likes

This seems rather arbitrary to me.
Why 2 years?
Is there any proof that after 2 years ASIC/hashpower centralization will be less of an issue?
Bitcoin did not start out with ASICs, yet here we are today.
I've been following the development of this project and I think this budding community really cares about decentralization. A look at Siacoin throws the "ASICs bring value" argument out.
I think centralization will be as much of an issue in 2 years as you fear it will be at launch.
I think it is worth looking into permanent measures to discourage centralization, such as the RandomJS proof of work (https://github.com/tevador/RandomJS).
We can already see the effects of centralization of hashpower and ASIC manufacturing capabilities. If this is an issue now, it will be an issue then. Just something to consider.

Can I hear the reasons not to do this?

CryptoNight (v2) is not intrinsically very ASIC resistant. It's mostly tweaks to invalidate their current PoW and any potential ASICs. The tweaks themselves make it only very marginally more ASIC resistant.

re: RandomJS, the answer is in the first sentence: experimental. I'd have some serious reservations about its security. PoW algos are hard.

1 Like

I feel like only catering to miners with higher-memory GPUs just cuts out a lot of the grassroots miners who were excited to be a part of the network.

1 Like

These are the practical effects of implementing the above:

  1. Only GPU miners will be able to mine for the first 2 years, while the dual equihash + cuckoo cycle algorithms are in effect.
  2. After 2 years, the Grin team is assuming a decentralized ASIC marketplace. At that point, Grin switches over to cuckoo cycle only, which is very efficient for ASICs. Everyone will mine with ASICs, but this is not a concern because at that point, the ASIC market is “mature” and will be decentralized, allowing small miners to continue to mine grin.
  3. Assumption: the Grin team will extend the 2-year period as long as needed for the ASIC market to be “mature”.
  4. For GPU mining at the start of mainnet, it makes sense to use a card with at least 8 GB of memory (e.g. a 1080 Ti)

^ Can someone confirm that my understanding of the effects is correct?

1 Like

ASIC miners will be able to mine, but the dual PoW will benefit GPU miners at the start and gradually shift towards enabling ASIC mining. Also, ideally, the community will help fund the design of an open ASIC during that time.
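To make "gradually shift" concrete, here is a minimal sketch (Rust, with made-up constants; not the actual consensus parameters) of how the share of blocks reserved for the GPU-friendly PoW could ramp down linearly to zero over the two-year window:

```rust
// Toy sketch of a dual-PoW split that favours GPUs at launch and
// gradually hands over to ASICs. Constants are illustrative only.

const BLOCKS_PER_YEAR: u64 = 60 * 24 * 365; // one-minute blocks
const TRANSITION_BLOCKS: u64 = 2 * BLOCKS_PER_YEAR; // the "2 year" window
const INITIAL_GPU_SHARE: f64 = 0.90; // assumed 90% of blocks to the GPU PoW at launch

/// Fraction of blocks expected to be solved by the GPU-friendly PoW
/// at a given height; ramps linearly down to 0 over the transition.
fn gpu_pow_share(height: u64) -> f64 {
    if height >= TRANSITION_BLOCKS {
        return 0.0;
    }
    let remaining = (TRANSITION_BLOCKS - height) as f64 / TRANSITION_BLOCKS as f64;
    INITIAL_GPU_SHARE * remaining
}

fn main() {
    for year in 0..=2 {
        let h = year * BLOCKS_PER_YEAR;
        println!("year {}: ~{:.0}% of blocks to GPU PoW", year, gpu_pow_share(h) * 100.0);
    }
}
```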

1 Like

Given that grin is primed to grow over many decades, shouldn’t the switch to cuckoo take more than two years to allow wider distribution, or is it too difficult to stave off ASICs longer than that?

Single-digit supply inflation doesn't even start until 11+ years out. After two years inflation will be ~50%… that means by year 5 it's possible that ASIC companies will have mined HALF of all coins in existence, an economic vulnerability. Especially given that in 30 years supply inflation will be down below 4%… ~50,000,000 coins out of ~650,000,000 (25 years out or so) is almost 10%. That could mean nearly 1 out of every 10 grin possibly owned by a group of year 3-4 ASIC conglomerates? I guess I am not optimistic that a Cuckoo Cycle ASIC will be cheap and public in only 2 years.
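(For what it's worth, these percentages follow from the linear emission alone: with a constant emission of $E$ coins per year, supply and annual inflation are simply

$$S(t) = E \cdot t, \qquad \text{inflation}(t) = \frac{E}{S(t)} = \frac{1}{t},$$

so inflation is ~50% at year 2, first dips below 10% after year 10, and the coins emitted in any two-year window are $2/t$ of the supply at year $t$, e.g. years 3-4 are $2/25 = 8\%$ of the year-25 supply.)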

It's also worth taking into account that it took almost 10 years for bitcoin to become as familiar to the mainstream as it is (and it is still not well understood by most). While a post-bitcoin cryptocurrency should proliferate faster than bitcoin did, I think it will take more than 2 years for word to spread widely enough for it to be fair to then introduce an ASIC. Introducing ASICs before word of mouth has sufficiently spread is akin (in an extremely conservative sense) to a premine. We want the widest variety of people to know about grin, and to know that an ASIC is coming beforehand, not just the small subset of people who find out early on. We won't have to worry about marketing/advertising nearly as much if we allow more time for cheap/free word of mouth to spread. That means less money wasted that could go to core development, and an open-source design with years of awareness and lead time means more time to raise money for it. Rushing is expensive; slow and steady is efficient. Again, with a timeframe as wide as grin's supply inflation assumes, is two years enough to get the world on board with a currency they might be using in 100 years? A level playing field should be level in time as well as space.

If possible, it seems like 5 years should be the bare minimum to continue PoW tweak forks. And as @kargakis said, that gives us more time for ASIC manufacturing costs to come down and an open-source ASIC to be developed. It's only in the last few years that ASIC competition has started to arise; 5+ years is going to put them into many more hands than possible in just two. I also think an open-source ASIC is an extremely worthwhile (and possibly revolutionary) idea that we don't want to rush.

Vorick also made it sound like 6 months is enough time for an extremely dedicated (and secretive) ASIC manufacturer to get in and mine, so it's more intensive, but it might be worth considering shorter time frames for PoW tweak forks. If we did it every 4 months we could line it up with the equinoxes/solstices, which is witchy. I think, given the enthusiast nature of grin, people will be able to update their machines on time.

is it too difficult to stave off ASICs longer than that?

I fear it's going to be VERY hard to stave off ASICs for as long as 2 years. We may start seeing vastly more capable ASICs incorporating FPGA functionality and inter-ASIC links to share SRAM.

1 out of every 10 grin possibly owned by a group of year 3-4 ASIC conglomerates

Your worry should be that a single entity dominates 9 out of every 10 grin with an ASIC that can handle anything we can come up with as our 2nd PoW…

1 Like

Right. I guess small PoW tweaks are no guarantee once that is the challenge ASIC manufacturers are trying to beat. Maybe within 2 years a more nefarious/effective tweak can be designed/implemented?

Should we really be that worried about ASICs? In the first few years, what incentive would ASIC manufacturers have?

With a linear emission curve, is there actually going to be enough incentive for GPU miners to participate? Are there any other examples of a GPU-mineable coin that isn't deflationary? Don't get me wrong, I like the monetary policy; however, as a GPU miner I'm doubtful that Grin is going to be a coin that is initially profitable to mine. Especially if you're restricting it to the latest gen of GPUs with 8 GB requirements. As it currently stands, the most profitable GPU mining coins have an ROI of around 18 months. At this rate, the ROI of a 20xx series card = NEVER.

Return on investment in mining is typically calculated over 6 months to 18 months. Almost every coin I can think of would have a flat emission, just like grin, over that period. Also, the ROI period of a manufacturer is quite a bit shorter than that of a miner.
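(Roughly, the payback period people quote is just

$$\text{payback (days)} \approx \frac{\text{hardware cost}}{\text{coins mined per day} \times \text{coin price} - \text{power cost per day}},$$

and with a flat emission the coins-per-day term only falls as network hashrate grows, not because the block reward shrinks.)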

Right now ROI on GPU mining is pushing out past 18 months. If you go into many altcoin Telegram/Discord groups, GPU miners are starting to switch their rigs off.

Let me rephrase that: what other GPU PoW coin has a constant reward and aims to be inflationary? I think the monetary policy will be off-putting for most GPU miners. I'm not sure if that's been taken into consideration.

I'd also encourage you to take a deeper look at something like RandomJS. As it's claimed on grin-tech.com in bold font:

Grin is still highly experimental technology

I'm not an expert, but anyway, I assume that if the PoW uses most of the capabilities of a given device, then the only way to get more mining efficiency is to make that device faster.

Something like RandomJS seems to use the distinctive feature of all CPUs: compiling and executing arbitrary code. ASICs are then effectively impossible to build, because any such ASIC would just be a CPU. In the future it will be possible to supplement such a PoW with GPU-specific features (like rendering random polyhedrons, shaders, or something similar; we just don't know yet how to cleverly design the PoW concept and its result validation so that it fits a small-size blockchain, but we'll get there one day) and tweak the resulting PoW so that mining a coin would require roughly one CPU and one GPU (which is equivalent to a PC/smartphone/laptop).
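To make the "random code as PoW" idea concrete, here is a toy sketch (Rust, std-only, unrelated to the real RandomJS; the opcodes, sizes and hash are all made up for illustration): the header and nonce seed a PRNG, the PRNG generates a small random program, the program is executed, and the result has to meet a difficulty target.

```rust
// Toy "random program" PoW: seed -> random program -> execute -> hash result.
// Purely illustrative; not RandomJS and not a secure PoW.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Tiny xorshift PRNG so the example has no dependencies.
struct Rng(u64);
impl Rng {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
}

#[derive(Debug)]
enum Op { Add(u64), Xor(u64), Rotl(u32), Mul(u64) }

/// Generate a random straight-line program from the seeded PRNG.
fn gen_program(rng: &mut Rng, len: usize) -> Vec<Op> {
    (0..len)
        .map(|_| match rng.next() % 4 {
            0 => Op::Add(rng.next()),
            1 => Op::Xor(rng.next()),
            2 => Op::Rotl((rng.next() % 63) as u32 + 1),
            _ => Op::Mul(rng.next() | 1), // odd multiplier keeps the step invertible
        })
        .collect()
}

/// Execute the random program on a starting value.
fn run(program: &[Op], mut acc: u64) -> u64 {
    for op in program {
        acc = match op {
            Op::Add(x) => acc.wrapping_add(*x),
            Op::Xor(x) => acc ^ *x,
            Op::Rotl(r) => acc.rotate_left(*r),
            Op::Mul(x) => acc.wrapping_mul(*x),
        };
    }
    acc
}

fn pow_attempt(header: &[u8], nonce: u64) -> u64 {
    // Derive a seed from header + nonce (stand-in for a real hash function).
    let mut h = DefaultHasher::new();
    header.hash(&mut h);
    nonce.hash(&mut h);
    let seed = h.finish() | 1;

    let mut rng = Rng(seed);
    let program = gen_program(&mut rng, 64);
    run(&program, seed)
}

fn main() {
    let header = b"toy block header";
    let target = u64::MAX / 1_000; // toy difficulty
    for nonce in 0.. {
        if pow_attempt(header, nonce) < target {
            println!("found nonce {}", nonce);
            break;
        }
    }
}
```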

I believe it's possible to resist ASICs, but it seems like the Grin dev team have given up on it and consider ASICs inevitable and even useful. (And I'm not arguing, because I can't be sure you guys are going the wrong way.)

Therefore another question would be: if you embrace ASICs, why not just take SHA256d instead of Cuckoo? SHA256d is simple enough and has been around for years.
Please don't say "because Bitmain owns most of the hashpower"; the same fate awaits Cuckoo if Grin becomes popular :slight_smile:

I've been following Grin for some time and I do really like the technology and the concept. I'd really be very glad if you guys made the correct decision (which, alas, nobody knows). But at least think about what I wrote above. If you embrace ASICs once, it may be quite painful to move away from them in the future.

Something like RandomJS seems to use the distinctive feature of all CPUs.

That does not seem to be entirely true. Are all cache levels fully used? Is the memory controller used? Can the random code fit into the L1 cache of a cheap ARM core with HW JavaScript accelerators? Are there any shortcuts that could be exploited? If you chain multiple experimental technologies, your chances of failure grow much more quickly.

I believe it’s possible to resist ASICs

Yes, it is possible to trivially construct a PoW that cannot have an ASIC with a high efficiency gain (over 4x, compared to Cuckoo with 100x-500x). But PoW complexity increases and verification is slower. Personally, I would combine a large memory footprint, large lookups and random code execution to impose a hard limit on ASIC efficiency. RandomJS does not strike me as the best option if you want to go that route.
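As a rough illustration of the "large memory footprint + large lookups" part (not RandomJS, and not what Grin does), a sketch like the following forces every attempt to make data-dependent reads across a buffer too big for on-chip SRAM; the 256 MiB size, mixing function and round count are arbitrary:

```rust
// Toy memory-hard loop: a chain of data-dependent lookups into a large
// buffer, so an ASIC can't avoid paying for DRAM bandwidth and latency.
// Sizes and mixing are illustrative only.

const BUF_WORDS: usize = (256 * 1024 * 1024) / 8; // ~256 MiB of u64s
const ROUNDS: usize = 1 << 20;

/// Fill the scratchpad deterministically from a seed (stand-in for a real KDF).
fn fill_scratchpad(seed: u64) -> Vec<u64> {
    let mut state = seed | 1;
    (0..BUF_WORDS)
        .map(|_| {
            // xorshift step
            state ^= state << 13;
            state ^= state >> 7;
            state ^= state << 17;
            state
        })
        .collect()
}

/// Walk the scratchpad with data-dependent indices; each read decides
/// where the next read goes, which defeats simple pipelining/prefetching.
fn memory_hard_mix(scratch: &[u64], seed: u64) -> u64 {
    let mut acc = seed;
    let mut idx = (seed as usize) % scratch.len();
    for _ in 0..ROUNDS {
        let word = scratch[idx];
        acc = acc.rotate_left(29).wrapping_add(word) ^ (word >> 11);
        idx = (acc as usize) % scratch.len(); // next index depends on this read
    }
    acc
}

fn main() {
    let seed = 0x243F_6A88_85A3_08D3u64; // arbitrary
    let scratch = fill_scratchpad(seed);
    println!("mix result: {:x}", memory_hard_mix(&scratch, seed));
}
```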

Please don't say "because Bitmain owns most of the hashpower"

Are all cache levels fully used? Is the memory controller used? Can the random code fit into the L1 cache of a cheap ARM core with HW JavaScript accelerators? Are there any shortcuts that could be exploited?

I'm not talking about exactly the implementation of RandomJS that currently exists; as I said, I'm not an expert and I can't do research on these subjects. However, JavaScript is a Turing-complete programming language, and I'm quite sure that in general the compilation and execution of programs written in such a language is faster and more efficient on CPUs, because modern CPUs are designed to do exactly this. And the idea of taking random code as a PoW sounds, to me, like the ultimate, comprehensive way to achieve ASIC resistance. Much more universal than trying to invent a static algorithm consisting of operations which a CPU runs faster than an ASIC.

Personally, I would combine a large memory footprint, large lookups and random code execution to impose a hard limit on ASIC efficiency. RandomJS does not strike me as the best option if you want to go that route.

This is just one part, "random code execution" (which, by the way, due to its randomness may include a large memory footprint and lookups), but anyway.

Here I'm not trying to say that the current RandomJS implementation is worthy, because I don't know. What I'm driving at is that eventually either this RandomJS, or a new RandomJS, or a new ProgPoW, or something else new will appear, and it will be ASIC resistant; of that I'm 100% sure, because I don't see anything that could stop it from happening.

Having this in mind, I have doubts that a really great tech project being ASIC friendly (or striving to be) from the very beginning is the right way to go. It's possible to resist ASICs like Monero does, hard-forking every 6 months until something like RandomJS (or RandomScala, who cares), probably combined with other ASIC-hostile operations (memory lookups, AES, complex math, or some complex 3D geometry operations which GPUs are made for, whatever), finally emerges.

Let me rephrase that. A CPU is an ASIC (edit: I meant to say that it is technologically made just like an ASIC).

They can put an array of ARM cores and HW modules on a chip that speed up whatever you come up with, with zero wasted power.

We already know how to stop efficient ASICs; it is not magic. It can all be calculated. Grin intends to keep the PoW simple and use hard forks to prevent ASICs. You can't ask them to use highly experimental PoW schemes when there is core Grin stuff that needs to be implemented first.

I’m not asking anyone to use highly experimental PoW :slight_smile:
I'd just like to understand the dev team's line of thinking that led them to the decision to become ASIC friendly without a struggle (or did I get it wrong?)

You simply cannot ever compete with determined ASIC manufacturers. They will be cheaper and more efficient.

They can put an array of ARM cores and HW modules on a chip that speed up whatever you come up with, with zero wasted power.

I'd argue with calling a CPU an ASIC. I mean, yes, it sort of is, but let's call it a very complicated ASIC.

And yes, I agree, putting thousands of tiny ARM processors on a board might be an answer. But a very expensive one, as for me, and it can hardly be called an ASIC; the same way I could go to China, buy a thousand cheap old smartphones (or even pick them out of the trash bins) and create my own farm. Yes, it does make sense.

We already know how to stop efficient asics, it is not magic. It can all be calculated. Grin intends to keep PoW simple and use HF to prevent asics.

Stop, wait. So I got it all wrong? I thought Grin intended to keep ASICs away until they become cheap, simple and available to the average Joe (which I don't believe in, because of the clear Bitcoin example). If Grin does want to prevent ASICs eventually, then I beg your pardon for wasting your time.

I'd just like to understand the dev team's line of thinking that led them to the decision to become ASIC friendly without a struggle (or did I get it wrong?)

Since you cannot prevent ASICs, you might as well give small chip makers two years to design a chip to run a very simple algorithm like Cuckoo. This is what Grin does. A :bird: Cuckatoo :bird: lean miner is very simple to design.
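For context on why a lean miner is simple: in Cuckoo Cycle the whole PoW is "derive the edges of a random bipartite graph from a keyed hash of the header, then find a 42-cycle". Below is a rough sketch of just the edge-generation part; the real spec uses siphash-2-4 keyed by the header (DefaultHasher here is only a stand-in), and the cycle search itself, which is the actual mining work, is omitted:

```rust
// Sketch of Cuckoo Cycle edge generation. The real spec derives edge
// endpoints with siphash-2-4 keyed by the block header; DefaultHasher
// is only a stand-in here, and the 42-cycle search is omitted.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const EDGE_BITS: u32 = 10; // toy graph; Cuckatoo uses 31+ edge bits
const NUM_EDGES: u64 = 1 << EDGE_BITS;

fn node(header: &[u8], index: u64) -> u64 {
    // Stand-in for siphash(header_key, index) % NUM_EDGES.
    let mut h = DefaultHasher::new();
    header.hash(&mut h);
    index.hash(&mut h);
    h.finish() % NUM_EDGES
}

/// Edge i of the bipartite graph: one endpoint on the "u" side,
/// one on the "v" side, both derived purely from the header.
fn edge(header: &[u8], i: u64) -> (u64, u64) {
    (node(header, 2 * i), node(header, 2 * i + 1))
}

fn main() {
    let header = b"toy header with nonce";
    // A miner would generate all NUM_EDGES edges and then search the
    // resulting graph for a cycle of length 42; a verifier only needs
    // to recompute the endpoints of the 42 edges in a claimed solution.
    for i in 0..5 {
        let (u, v) = edge(header, i);
        println!("edge {}: u{} -> v{}", i, u, v);
    }
}
```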

But a very expensive one

If it prints money and it can be made, it will be made

If Grin does want to prevent ASICs eventually

No, Grin wants ASICs to be made. You can't hard-fork forever; it kills decentralization. I don't speak for Grin, I'm just telling you that it is generally known how to stop SHA256-like ASICs with 1000x efficiency over GPUs. It is a deliberate choice (that I agree with) of Grin not to go that way. This project may last until the Sun blows up; you don't want to fix the PoW to fit 2018 GPUs. You can debate how long to keep a GPU advantage over ASICs, but not IF.