Wishlist for v5.0.0 - the last of the scheduled hard forks

Introduction

The scope of Hard Fork 3 / v4.0.0 was defined last week.

Around January 15, 2021, we expect Hard Fork 4 / v5.0.0 to occur. This is the last of the scheduled hard forks. It’s unclear whether there will be more added, but if things play out according to the current plan, we should not be expecting to introduce consensus breaking changes after that version.

So what really needs to go into v5.0.0 for Grin to be in the best possible shape on the consensus layer by then?

If you have ideas that require consensus changes, v5.0.0 will be your (only?) chance to see them implemented. We’re only a few months away from the scope freeze (around October 2020). Now is as good a time as any to make your case.

I’m adding things off the top of my head. Comment below, and I’ll update the list. Some might be targeted for v4.0.0 already, but I think they ought to be listed as well, in case that target is missed. The more, the better at this point.


v5.0.0 wish list

In no particular order or priority, items that are consensus breaking or that benefit greatly from being introduced through a hard fork:

  • Baseline method of transaction building. Some “good enough” approach that can be promoted as the minimally viable transaction building method.
  • Relative time locks. To enable future payment channel implementations (see the sketch after this list).
  • Soft-fork support. Offering a clear path for conducting soft forks, should we want to.
  • Getting rid of txhashset.zip. Utilizing RFC#0009-enable-faster-sync to improve the IBD process.
  • Improvements to dandelion / tx propagation. Not properly defined yet, but perhaps try to iterate based on previous conversations and proposals like Objective-dandelion.
  • Support for duplicate outputs. Likely required for simplifying payment channel designs.
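For the relative time locks item, a minimal sketch of what a relative-height-locked kernel could look like is below. The variant and field names (RelativeHeightLocked, relative_height) are hypothetical, not Grin’s actual API; this only illustrates the validation rule such a lock would add.

```rust
/// Hypothetical kernel features extended with a relative lock variant.
enum KernelFeatures {
    Plain,
    HeightLocked { lock_height: u64 },
    /// Only valid once at least `relative_height` blocks have passed since
    /// a matching earlier kernel was included in the chain.
    RelativeHeightLocked { relative_height: u64 },
}

/// Validate a relative lock against the height at which the referenced
/// earlier kernel was included.
fn check_relative_lock(
    features: &KernelFeatures,
    prev_kernel_height: u64,
    current_height: u64,
) -> Result<(), &'static str> {
    match features {
        KernelFeatures::RelativeHeightLocked { relative_height } => {
            if current_height >= prev_kernel_height + relative_height {
                Ok(())
            } else {
                Err("relative lock height not yet reached")
            }
        }
        _ => Ok(()),
    }
}
```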

What else? Anything that should be removed? Write below!


Changelog

  • April 08: adding duplicate outputs
3 Likes

Support for duplicate outputs, likely required for simplifying payment channel designs.

4 Likes

Question for @tromp: is there anything that should be removed or cleaned up from the codebase once only a single proof-of-work is accepted? Would it be consensus breaking?

All PoW code must remain to verify the initial 2 years of header history. Fortunately, this only constitutes a few pages of code.
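To illustrate why, a rough sketch of header PoW dispatch is below; the function names and the cutoff constant are illustrative, not Grin’s actual code. Headers from the first ~2 years may carry the secondary (ASIC-resistant) proof-of-work, so its verifier can never be deleted: old headers still need it during IBD.

```rust
// Illustrative cutoff: roughly 2 years of 60-second blocks.
const SECONDARY_POW_END_HEIGHT: u64 = 1_051_200;

// Stubs standing in for the real Cuckaroo*/Cuckatoo verifiers.
fn verify_cuckaroo(_proof: &[u64]) -> Result<(), &'static str> { Ok(()) }
fn verify_cuckatoo(_edge_bits: u8, _proof: &[u64]) -> Result<(), &'static str> { Ok(()) }

fn verify_header_pow(height: u64, edge_bits: u8, proof: &[u64]) -> Result<(), &'static str> {
    if edge_bits == 29 {
        // Secondary PoW, only valid while being phased out.
        if height >= SECONDARY_POW_END_HEIGHT {
            return Err("secondary PoW no longer accepted");
        }
        verify_cuckaroo(proof)
    } else {
        // Primary PoW (Cuckatoo31+).
        verify_cuckatoo(edge_bits, proof)
    }
}
```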

Is there a feasible idea on the horizon towards non-interactive kernel merging aka uLTiMaTe sCaLabiLiTy?

Two wishlist wishes:
A realistic possibility would be multisig wallets (although I imagine this can be done without a fork, so then not a priority).
Less realistic would be the Stratum V2/BetterHash mining protocol (which would also provide a better possible voting mechanism, if one is ever adopted).

1 Like

Not sure if this is the right place to post this, but does Grin already have a delayed block penalty system or something else to protect it against a 51% attack? If not, it might be prudent to add something like this to Grin while we can.
See the delayed block penalty system from Horizen (ZEN), which I think is rather simple to implement:

I believe the model you linked has a subjective consensus due to the possible difference in what a “local chain looks like”. So if you have not received the last block yet and you see the attacker chain, you will penalize it differently than other nodes which may have received an additional block. On top of that you have the “new node problem”. A new node joining the network can’t tell which chain is old/new and will sync to the attacker chain creating even more issues due to the subjective consensus.
If you want to keep consensus objective you need to assign a score to a chain in isolation instead of comparing it to your local chain.
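To make the subjectivity concrete, a delayed-block-penalty score of the kind linked above might look roughly like the sketch below (the penalty function is illustrative, not Horizen’s exact one). The score depends on when this particular node first saw each block:

```rust
/// A fork block plus local reception state: `local_tip_at_receipt` is the
/// height of our own tip when we first saw the block.
struct ForkBlock {
    height: u64,
    local_tip_at_receipt: u64,
}

/// Illustrative delayed-block penalty: each fork block received "late"
/// relative to our tip adds its delay to the total the fork must wait out
/// before it may replace our active chain.
fn fork_penalty(fork: &[ForkBlock]) -> u64 {
    fork.iter()
        .map(|b| b.local_tip_at_receipt.saturating_sub(b.height))
        .sum()
}
```

Because local_tip_at_receipt differs from node to node, two honest nodes can assign different penalties to the very same fork, which is exactly the disagreement described above.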

3 Likes

In addition to the problems @oryhp points out, the Horizen rule also allows an attacker with less than 50% of the hashrate to split the remaining hashpower into roughly two halves extending different branches, and to time the release/geography of new blocks on either branch to make miners on each branch more and more reluctant to switch over.
The proposers only/mostly thought about how the new rules deal with old attacks, but failed to realize the large scope for new attacks enabled by the more complex rules.

3 Likes

Well, good thing I posted it here, otherwise I would never have understood the drawbacks. Having a new node mine an attacker’s chain, or having subjective penalties, does not seem like a huge problem to me since the network as a whole still operates fine; but that the rule can be used to split hash power and creates more opportunity for complex attacks is indeed a huge problem 😓.
So basically there is still no good way out there to make 51% attacks more difficult.

I think it’s a bigger problem than most realize. Subjective consensus opens up the possibility of splitting the network in two parts even when everyone is following the same rules. I’ve seen many claim to have solved 51% attacks, and most of them ended up with a variant of what you linked, which loses the objectivity that Nakamoto consensus has.
Btw the problem is not that the new node is mining on the attacker chain, the problem is that nodes no longer agree on what the latest state is. This can’t happen with Nakamoto consensus.

2 Likes

I am still trying to wrap my mind around the consequences of having subjective consensus :thinking:. Therefore, I thought it might become clear while discussing some examples. So let us say an attacker has split the blockchain into two versions by mining in private on a chain from which he removed one of his own transactions. He is penalised by other miners based on the time that he was hiding his chain (based on how many blocks back his chain diverges), so he and the few non-up-to-date miners who accept his chain have to keep mining it before it is accepted by other miners as the ‘true’ version of the blockchain, effectively making the attack more expensive. Only when it is accepted as the true version (longer, and the penalty time is met) will the chain be accepted by other miners. So far that all sounds like a good thing to me. I mean, the existence of different chains also happens under Nakamoto consensus.
So if I understand it correctly, the problem is that when the network is purposefully disrupted (e.g. by disrupting one or more intercontinental internet cables), the hash power stays divided for a longer time due to the penalty system, making the network vulnerable to another attack. However, even if this opens a window for a subsequent cheaper attack, say a 25% attack on the longest chain on one side of the partition, that attack would again be more expensive due to the same penalty system.
To help make it clear, can you (@oryhp or @tromp) give a practical example of a more complex attack, or of other issues arising from a penalty system or the resulting disagreement between nodes, that I might have missed in the example above?

I think you’re confusing two very different things. The fact that we have Bitcoin and Bitcoin Cash is not a consequence of Nakamoto consensus. Nakamoto consensus guarantees that if we had a complete graph of full nodes (each node connected to all other nodes) and you asked each node which chain is the ‘reality’, all the nodes would point to exactly the same chain. This is a consequence of the consensus being objective. In reality we have a p2p network due to real-life constraints, but the property still holds. This is no longer true in the idea mentioned above.
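By contrast, objective (Nakamoto) chain selection scores a chain in isolation, along the lines of this minimal sketch (illustrative, not Grin’s actual code):

```rust
/// A chain is scored purely by its cumulative work, so every node, shown
/// the same candidate chains, picks the same winner. No local state
/// (arrival times, current tip) enters the comparison.
struct Chain {
    total_work: u128,
}

fn best_chain(candidates: &[Chain]) -> Option<&Chain> {
    candidates.iter().max_by_key(|c| c.total_work)
}
```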

My main issue is that subjectivity in general is complicated, hard and has more attack vectors - usually on the social level. It often introduces politics and social attacks (Ethereum is a great example imo). The best thing is to keep it simple and objective solutions are simpler just because they are more predictable than their subjective alternatives. I’m sorry, I don’t really know how to explain this in a good way.

If you are ok with a subjective consensus, you might as well think about picking something simpler, like adding a max reorg cap rule to the consensus: a local blockchain can’t be reorged by more than K blocks. This gives you some sort of finality at least. Btw, I’m also very much against this.
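For reference, that simpler rule could be as small as the sketch below (the value of K is illustrative). Note it is still subjective: the decision depends on the local tip, and a freshly syncing node has no tip to measure a reorg against.

```rust
/// Illustrative max-reorg cap: refuse any fork that rewinds our local
/// chain by more than MAX_REORG_DEPTH blocks. Gives a form of finality,
/// but is not objective in the Nakamoto sense.
const MAX_REORG_DEPTH: u64 = 60;

fn accept_reorg(local_tip_height: u64, fork_point_height: u64) -> bool {
    local_tip_height.saturating_sub(fork_point_height) <= MAX_REORG_DEPTH
}
```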

I’m not good at coming up with examples fast, but I assure you that in the long run you want to keep these mathematical properties.

I’d like to add that objectivity is a far more general topic than just Nakamoto consensus. Global systems that are objective will scale better on a social level because they will remain equally fair to everyone. Simpler designs will scale better because they have a much smaller attack surface and have a better chance of a serious Lindy effect in the long run.

1 Like

Mmm, ok, thanks for trying to clarify it for me. Intuitively I do understand that simplicity and consensus objectivity between nodes are desirable as a system property. So I get what you are getting at, even though I cannot fully explain it myself.
I can also understand that scaling and future prospects will be better with a simple, minimal, predictable system, which is also more in line with the design philosophy of Grin. Anyhow, it is good food for thought. I will think more about this topic for some time to better comprehend all aspects and possible scenarios.

I thought it would be interesting to hear a bit more about Horizen’s (ZEN) reasoning for implementing the delayed block penalty system, and therefore had a chat with Robert Viglione, one of the founders of ZEN.
Below is a summary of his reply:

“We focused the penalty on preventing the routine 51% vector, and we considered a variety of edge cases to make sure we weren’t introducing new attacks.
That said, there are always ways to attack distributed networks, and the decision logic on our end was to close a routinely used vector in a way that didn’t have obvious, remotely likely, and/or opportunistically economic attacks.
We’re well aware of potential issues with network partitioning, hence the 4-block grace period before penalties are assessed. There’s virtually no chance an honest node would fail to receive solved blocks from the network in that period. We’re talking network broadcast latencies in the seconds, not the ~10 min range, so we set the threshold to give us a nice balance between significantly enhanced security and the extremely unlikely tail risk of a global network partition in which honest nodes failed to receive blocks within that window. And we built the decrement into the penalty so that, even in the event of this extremely unlikely network partition, there’s a convergence mechanism.
Simulations, and then real-world operations for a long time now, seem to indicate it was a good tradeoff, and that’s the point. It’s not about creating deterministic outcomes in this world of distributed networks, but about reasonable tradeoffs, and so far there’s no indication we took the wrong path :slightly_smiling_face:”
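For concreteness, the grace period and decrement described in the quote might look roughly like this (parameters are illustrative, not Horizen’s actual values):

```rust
/// Illustrative grace window: blocks arriving within GRACE blocks of our
/// tip incur no penalty, so ordinary propagation latency is never punished.
const GRACE: u64 = 4;

/// Penalty contributed by a single late block.
fn block_penalty(local_tip: u64, block_height: u64) -> u64 {
    let delay = local_tip.saturating_sub(block_height);
    delay.saturating_sub(GRACE)
}

/// Decrement: each block a penalised fork adds reduces its remaining
/// penalty, so even after a rare partition the two sides can converge.
fn decrement(remaining_penalty: u64) -> u64 {
    remaining_penalty.saturating_sub(1)
}
```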

I do not want to take a stand as a proponent or opponent of Horizen’s delayed block penalty system. Of course, it is fine that different crypto projects make different decisions, both based on good argumentation. However, considering how many projects suffered reputation damage from 51% attacks, and how many of those projects implemented systems similar to Horizen’s, I think it might be good if the core team members make it a ‘deliberate’ decision to implement or not implement such a system. If the core team already had a good discussion about the topic, that is of course not necessary. However, if the topic has not been discussed much, it might be prudent to discuss it at least among the core members, to list the pros and cons and formulate the project’s stance on the matter in case a 51% attack ever happens.

We made a deliberate decision to adhere to the longest (most-work) chain rule, in the firm belief that any “51% attack mitigation” introduces more problems than it solves. I think the best mitigation is an economic one; exchanges should be wary of being sold a large amount of grin and sufficiently delay the payout to reasonably minimize the risk of a deep-reorg with rented hashpower. Grin might be somewhat better off than other coins in that only high memory GPUs are suitable for such an attack and these are harder to rent in large quantities.
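As a back-of-the-envelope for that payout delay, an exchange could use the catch-up probability from section 11 of the Bitcoin whitepaper; a small sketch of it is below (this is the generic Nakamoto estimate, nothing Grin-specific):

```rust
/// Probability that an attacker controlling fraction `q` of the hashpower
/// catches up from `z` confirmations behind (Bitcoin whitepaper, section 11).
fn attacker_success(q: f64, z: u32) -> f64 {
    let p = 1.0 - q;
    let lambda = z as f64 * (q / p);
    let mut poisson = (-lambda).exp(); // Poisson term for k = 0
    let mut prob = 1.0;
    for k in 0..=z {
        if k > 0 {
            poisson *= lambda / k as f64; // advance to the k-th Poisson term
        }
        prob -= poisson * (1.0 - (q / p).powi((z - k) as i32));
    }
    prob
}
```

For example, attacker_success(0.1, 6) is roughly 0.0002, while against hashpower approaching 50% the required delay grows very quickly, which is the point of delaying large payouts.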

1 Like

Glad to hear good thought was put into the decision. True, staying away from ASICs and focusing on high-end video cards makes a 51% attack much less likely. And using common sense and waiting a couple of blocks to confirm large transactions also mitigates the risk.
For the long term I thought it was planned to allow some ASIC mining on Grin; maybe at that time the decision can be re-evaluated. Planning ahead, the possibility to implement a delayed block penalty system might be an interesting feature to introduce via a soft fork, or would it require a hard fork? Anyhow, I hope it will never be needed in the first place, but it is always good to consider worst-case scenarios and leave a bit of room to implement changes after hard fork 4.

1 Like

We’re not staying away from ASICs. We’ve just set the target of 0.5-1GB of SRAM so high that it could be several years before ASICs make economic sense. But we certainly do welcome them.

Quite the opposite. ASICs would make a 51% attack much less likely as such hardware is generally not for rent, and its value depends on the health of the coins it can be applied for.

No; it’s a misfeature.

1 Like

No; it’s a misfeature

I mean that the feature/misfeature might be needed in the future. Maybe it is indeed a misfeature :stuck_out_tongue: Anyhow, since there has been a lot of discussion on what a soft fork actually would/could be used for, this is an example of something that might be needed. I stand corrected about the ASICs, assuming of course that no other large project implements a similar mining algorithm, in which case their profitability would no longer depend on Grin.