Quick update as most of my effort has been around getting things ready for HF3 aka v4.0.0.
Spent a few days last week investigating a bug that we identified in the way we handle params in the get_kernel_v2 api call. On the surface it appeared to be related to missing outputs but it was eventually tracked down to some dubious param handling when searching for kernels.
Fix was just merged to master and 4.0.x branches.
We are in the process of tagging and releasing v4.0.1 to include this fix. Announcement for this will be going out later today.
Also spent some time looking into what would be involved in modifying transaction inputs to only include the output commitment. We currently require the output "features" in addition to the output commitment when spending an output. This is redundant data in the context of both transactions and blocks. If we can find a way to clean this up then it will help simplify the path toward supporting "duplicate outputs".
As we currently enforce uniqueness in the utxo set this is not a consensus change, but it does introduce some complexities if nodes need to support both variants of transaction inputs during block propagation on the network.
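To make the redundancy concrete, here is a rough sketch of the difference, using hypothetical struct names (LegacyInput, Input) rather than the actual grin types:

```rust
/// What spending an output effectively requires today: the output
/// features (plain vs coinbase) plus the output commitment.
struct LegacyInput {
    features: u8,          // redundant: derivable from the output being spent
    commitment: [u8; 33],  // Pedersen commitment identifying the output
}

/// What an input could be reduced to: the commitment alone, since the
/// utxo set currently enforces commitment uniqueness.
struct Input {
    commitment: [u8; 33],
}
```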
Finally - spending time thinking about and discussing "replay attacks" and "play attacks".
We still don't have community consensus on the best approach to tackling this, but we do have a clearer understanding of the problem.
Quick update on what I've been focused on over the past couple of weeks.
There have been a lot of in-depth conversations around planning and scope for the final scheduled hardfork in January.
Simply keeping on top of these has been a full-time role.
One thing we know we want to do is remove the redundant input features specified in both transactions and blocks. We "spend" based on commitment and we currently enforce uniqueness in the utxo set, so it is technically redundant to require a transaction to specify the features byte of the output being spent.
Ideally we can get rid of this dependency and simplify transaction and block serialization.
This is not a consensus change but p2p serialization is complicated here by the need to support backward compatibility for existing peers.
PR in progress here but it is a large PR with some invasive changes -
This is going to be tough to review and I'm still not entirely happy with the changes involved.
Off the back of this I took the opportunity to refactor and cleanup some code.
These were extracted as separate PRs -
I want to spend some time this week breaking the big WIP PR apart.
I am hopeful we can get this split out into a couple of smaller self-contained PRs that will be easier to review and merge.
If we can get this finished up this week then I can move onto whatever the next priority is for HF4.
And this is going to involve thinking a bit more about scope of work over the next few months.
Not doing great on timely status updates recently - making it a priority to shorten the time between these.
I have been deep in the consensus logic investigating an inconsistency in how we deal with "cut-through", attempting to confirm whether this could surface as an actual problem in the consensus rules.
A summary of the summary - sometimes we use the hash to identify inputs (in both transactions and blocks) and sometimes we use the commitment. The hash includes the input features byte (plain or coinbase). This inconsistency led to cut-through being applied inconsistently to blocks such that a coinbase output could be spent and replaced by a plain output, sharing the same output commitment.
This violates the cut-through rules as defined but results in a block that appears to validate correctly.
But - even though the block would validate correctly "in isolation", a node would not accept the block as subsequent validation steps (namely spending by commitment) would result in a "duplicate commitment" error. So this resulted in the cut-through rules being applied correctly, but implicitly.
The above PR covers various test scenarios and reworks the cut-through validation logic to ensure we catch cut-through issues correctly and explicitly during block (and transaction) validation.
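As a rough illustration of the direction described above (not the actual implementation in the PR), an explicit cut-through check can match inputs against outputs purely by commitment, so the features byte never factors into the comparison:

```rust
use std::collections::HashSet;

// Commitments represented as raw bytes for the purposes of this sketch.
type Commitment = [u8; 33];

/// Fail if any commitment appears both as an input and as an output,
/// i.e. an output created and spent within the same block or transaction
/// that should have been cut through before inclusion.
fn verify_no_cut_through(
    inputs: &[Commitment],
    outputs: &[Commitment],
) -> Result<(), String> {
    let output_set: HashSet<&Commitment> = outputs.iter().collect();
    for input in inputs {
        if output_set.contains(input) {
            return Err("cut-through violation: input matches an output commitment".to_string());
        }
    }
    Ok(())
}
```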
This investigation took way more time than we hoped. But it has resulted in a solid improvement in the block processing pipeline validation logic. And should make this code far easier to maintain going forward.
With that out of the way I was able to get back to making progress on the "input features" refactoring that originally led to the discovery of this inconsistency.
We want to clean the p2p messages up, removing the redundant "input feature byte" and simplifying everything so we always just spend by commitment. A block contains a set of inputs, specifying the outputs to be spent exclusively by commitment.
This PR is now up and in review -
The complexity here is mainly around txpool management - each transaction needs to robustly "look up" the currently unspent outputs being spent. They may exist in the utxo set, but for transactions they may also be found in the current txpool (and stempool) for 0-confirmation spends.
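A rough sketch of what that lookup might look like, with hypothetical trait and type names (UtxoView, PoolView, OutputIdentifier) standing in for the real ones:

```rust
// Commitments represented as raw bytes for the purposes of this sketch.
type Commitment = [u8; 33];

struct OutputIdentifier {
    features: u8,          // recovered from the output itself, not the input
    commitment: Commitment,
}

trait UtxoView {
    fn get_unspent(&self, commit: &Commitment) -> Option<OutputIdentifier>;
}

trait PoolView {
    /// Outputs created by transactions currently in the txpool/stempool.
    fn get_pool_output(&self, commit: &Commitment) -> Option<OutputIdentifier>;
}

/// Resolve the output being spent by an input that only carries a commitment.
/// Check the utxo set first, then the pool for 0-confirmation spends.
fn resolve_input(
    utxo: &dyn UtxoView,
    pool: &dyn PoolView,
    commit: &Commitment,
) -> Result<OutputIdentifier, String> {
    utxo.get_unspent(commit)
        .or_else(|| pool.get_pool_output(commit))
        .ok_or_else(|| "input does not spend a known unspent output".to_string())
}
```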
We also need to support "legacy" peers that send/receive blocks and transactions with the input feature byte included. We need to handle this in a backward compatible way, while also ensuring we do not introduce a malleability issue (we do not want to simply ignore the input features byte).
My original intent during this 3 month period was to shift gears a bit and focus more on the wallet side of things. This, I think, was premature. There are lots of "in progress" discussions happening right now around the future of transaction building and wallets in general.
And we are very resource constrained right now in terms of people able to devote time to grin node codebase itself.
Early in-progress work -
processing full blocks is slow (both during fast sync and in regular block relay)
we still have scenarios where node shutdown can result in data corruption
These 2 points are heavily related and involve the way we deal with on-disk data, specifically the MMR files. I think there is a possibility we can avoid needing to rely on fsync as much as we do currently by moving the "leaf set" from on-disk files into the lmdb database. Dealing with the leaf set (effectively the utxo bitmap) transactionally in the db will allow the on-disk files to be purely append only. And I think we can interact with these robustly without ever needing to use fsync. Rather than trying to prevent file corruption at all costs, we can always recover from a previous state (which is simply truncating the append-only files).
This is still a very early exploration into the MMR implementation and I'm not sure yet if this is worth pursuing beyond the next few days. But I would like to see what would be involved in a transactional leaf set and writing to the MMR files without ever calling fsync on them.
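To illustrate the general idea (very much a sketch of the approach being explored, not the grin MMR backend), an append-only data file can be recovered by truncating back to the last committed length, with that length tracked transactionally in the db alongside the leaf set:

```rust
use std::fs::OpenOptions;
use std::io::{self, Seek, SeekFrom, Write};

/// Append entries to an append-only data file without calling fsync.
/// `committed_len` is the last length recorded transactionally in the db
/// (alongside the leaf set); anything beyond it is treated as garbage and
/// truncated away before appending.
fn append_entries(path: &str, committed_len: u64, entries: &[u8]) -> io::Result<u64> {
    let mut file = OpenOptions::new().read(true).write(true).open(path)?;

    // Recover from any previous partial write by dropping the uncommitted tail.
    file.set_len(committed_len)?;

    // Append only; no fsync. If we crash here, the next startup simply
    // truncates back to the committed length again.
    file.seek(SeekFrom::End(0))?;
    file.write_all(entries)?;

    // The new length only becomes the committed length once the db
    // transaction that updates the leaf set also records it.
    Ok(committed_len + entries.len() as u64)
}
```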
Quick update summarizing the changes that I have been involved in over the past couple of weeks.
The big one is now merged. Nodes running on master are now using protocol version 3 and will skip "input features" in both transactions and blocks when communicating with another version 3 peer.
This is fully backward compatible, with input features still being used when communicating with version 2 peers.
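Roughly, the wire-level decision looks something like the following (a simplified sketch, not the actual grin serialization code):

```rust
// Simplified sketch; the real protocol version handling differs.
struct ProtocolVersion(u32);

struct Input {
    features: u8,          // still tracked internally by the node
    commitment: [u8; 33],
}

fn write_input(buf: &mut Vec<u8>, input: &Input, version: &ProtocolVersion) {
    if version.0 < 3 {
        // Version 2 peers still expect the input features byte on the wire.
        buf.push(input.features);
    }
    // Version 3 peers receive the commitment only.
    buf.extend_from_slice(&input.commitment);
}
```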
Grin wallet required a couple of small changes as a result of this -
"get_unspent api changed to take a commitment directly"
"handle transaction inputs correctly"
As part of the above work we took advantage of the changes being introduced to clean up the Output/Output Identifier pair.
And added some more robust parsing of "unspent outputs" via the api -
It was necessary and useful to test a lot of the above with "preferred peers" configured, so we had some control over which peers we connected to and maintained connections with.
This has been improved and should come in useful for other scenarios -
We are planning to release grin 4.1.0 in the next few days. Primarily for "no more input features" above along with the protocol version bump.
As part of the proposed protocol version deprecation for HF4 we are making a "major" version bump as part of 4.1.0 -
Just wanted to say that grin is blessed to have @antioch, anybody who's following the github can clearly see the sheer amount, quality and consistency of his work. Much respect.