Not doing great on timely status updates recently - making it a priority to shorten the time between these.
I have been deep in the consensus logic, investigating an inconsistency in how we deal with “cut-through” and attempting to confirm whether this could surface as an actual problem in the consensus rules.
A summary of the investigation can be found here - https://github.com/mimblewimble/grin/pull/3424
A summary of the summary - sometimes we use the hash to identify inputs (in both transactions and blocks) and sometimes we use the commitment. The hash includes the input features byte (plain or coinbase). This inconsistency led to cut-through being applied inconsistently to blocks such that a coinbase output could be spent and replaced by a plain output, sharing the same output commitment.
This violates the cut-through rules as defined but results in a block that appears to validate correctly.
But - even though the block would validate correctly “in isolation”, a node would not actually accept it: subsequent validation steps (namely spending by commitment) would fail with a “duplicate commitment” error. So the cut-through rules were being applied correctly, but only implicitly.
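The hash-vs-commitment mismatch is easy to see in miniature. A minimal sketch (the types below are illustrative stand-ins, not the actual grin structs): two inputs spending the same commitment but carrying different features bytes are identical when identified by commitment, yet distinct when identified by hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative stand-ins for the real grin types.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum OutputFeatures {
    Plain,
    Coinbase,
}

#[derive(Hash)]
struct Input {
    features: OutputFeatures, // the extra byte included in the hash
    commitment: [u8; 33],     // the Pedersen commitment (fixed bytes here)
}

fn hash_of(input: &Input) -> u64 {
    let mut hasher = DefaultHasher::new();
    input.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let commit = [7u8; 33];
    let plain = Input { features: OutputFeatures::Plain, commitment: commit };
    let coinbase = Input { features: OutputFeatures::Coinbase, commitment: commit };

    // Identical when identified by commitment...
    assert_eq!(plain.commitment, coinbase.commitment);
    // ...but distinct when identified by hash, because the hash covers
    // the features byte - the inconsistency described above.
    assert_ne!(hash_of(&plain), hash_of(&coinbase));
}
```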
The above PR covers various test scenarios and reworks the cut-through validation logic to ensure we catch cut-through issues correctly and explicitly during block (and transaction) validation.
This investigation took way more time than we hoped. But it has resulted in a solid improvement in the block processing pipeline validation logic. And should make this code far easier to maintain going forward.
With that out of the way, I was able to get back to making progress on the “input features” refactoring that originally led to the discovery of this inconsistency.
We want to clean the p2p messages up, removing the redundant “input feature byte” and simplifying everything so we always just spend by commitment. A block contains a set of inputs, specifying the outputs to be spent exclusively by commitment.
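As a rough sketch of the wire-format change (illustrative types and sizes, not the actual p2p structs): the legacy input carries a redundant features byte alongside the commitment, while the reworked input is the commitment alone.

```rust
type Commitment = [u8; 33];

// Legacy wire input: redundant features byte plus commitment (34 bytes).
#[allow(dead_code)]
struct InputV1 {
    features: u8, // 0 = plain, 1 = coinbase - redundant, the chain already knows
    commit: Commitment,
}

// Reworked wire input: the commitment alone (33 bytes).
struct InputV2 {
    commit: Commitment,
}

impl InputV1 {
    // Dropping the features byte; the node resolves the output's features
    // by looking the commitment up in the utxo set (or txpool) itself.
    fn into_v2(self) -> InputV2 {
        InputV2 { commit: self.commit }
    }
}

fn main() {
    let legacy = InputV1 { features: 1, commit: [9u8; 33] };
    let reworked = legacy.into_v2();
    assert_eq!(reworked.commit, [9u8; 33]);
}
```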
This PR is now up and in review -
The complexity here is mainly around txpool management - each transaction needs to robustly “look up” the currently unspent outputs it is spending. These may exist in the utxo set, but for transactions they may also be found in the current txpool (and stempool) for 0-confirmation spends.
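The lookup order might be sketched like this (hypothetical names; the real txpool code is more involved): resolve an input strictly by commitment against the utxo set first, then fall back to unconfirmed outputs in the txpool and stempool to support 0-confirmation spends.

```rust
use std::collections::HashMap;

type Commitment = [u8; 33];

struct OutputInfo {
    coinbase: bool, // whether the output being spent is a coinbase output
}

// Hypothetical container for the three places an unspent output may live.
struct Chain {
    utxo: HashMap<Commitment, OutputInfo>,
    txpool: HashMap<Commitment, OutputInfo>,
    stempool: HashMap<Commitment, OutputInfo>,
}

impl Chain {
    // Resolve an input purely by commitment: confirmed utxo first,
    // then unconfirmed outputs in the txpool/stempool (0-conf spends).
    fn resolve_input(&self, commit: &Commitment) -> Option<&OutputInfo> {
        self.utxo
            .get(commit)
            .or_else(|| self.txpool.get(commit))
            .or_else(|| self.stempool.get(commit))
    }
}

fn main() {
    let mut chain = Chain {
        utxo: HashMap::new(),
        txpool: HashMap::new(),
        stempool: HashMap::new(),
    };
    chain.utxo.insert([1u8; 33], OutputInfo { coinbase: true });
    chain.txpool.insert([2u8; 33], OutputInfo { coinbase: false });

    assert!(chain.resolve_input(&[1u8; 33]).unwrap().coinbase); // confirmed output
    assert!(chain.resolve_input(&[2u8; 33]).is_some()); // 0-conf spend from pool
    assert!(chain.resolve_input(&[3u8; 33]).is_none()); // unknown output
}
```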
We also need to support “legacy” peers that send/receive blocks and transactions with the input features byte included. We need to handle this in a backward-compatible way, while also ensuring we do not introduce a malleability issue (we do not want to simply ignore the input features byte).
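One way to picture the backward-compatibility handling (the version constant and function here are assumptions for illustration, not the actual grin protocol versioning): gate the features byte on the negotiated peer protocol version, and validate it when present rather than ignoring it.

```rust
// Hypothetical protocol version at which the features byte is dropped.
const FEATURES_REMOVED_VERSION: u32 = 1_000;

// Serialize an input for a peer at a given negotiated protocol version.
// Legacy peers still receive (and must validate, not ignore) the features
// byte; newer peers receive the commitment alone.
fn serialize_input(commit: &[u8; 33], features: u8, peer_version: u32) -> Vec<u8> {
    let mut buf = Vec::new();
    if peer_version < FEATURES_REMOVED_VERSION {
        buf.push(features);
    }
    buf.extend_from_slice(commit);
    buf
}

fn main() {
    let commit = [4u8; 33];
    assert_eq!(serialize_input(&commit, 1, 999).len(), 34); // legacy: features + commit
    assert_eq!(serialize_input(&commit, 0, 1_000).len(), 33); // reworked: commit only
}
```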
My original intent for this 3 month period was to shift gears a bit and focus more on the wallet side of things. This, I think, was premature. There are lots of “in progress” discussions happening right now around the future of transaction building and wallets in general.
And we are very resource constrained right now in terms of people able to devote time to the grin node codebase itself.
Early, in-progress work -
- processing full blocks is slow (both during fast sync and in regular block relay)
- we still have scenarios where node shutdown can result in data corruption
These 2 points are heavily related and involve the way we deal with on-disk data, specifically the MMR files. I think there is a possibility we can avoid relying on fsync as much as we do currently by moving the “leaf set” from on-disk files into the lmdb database. Dealing with the leaf set (effectively the utxo bitmap) transactionally in the db would allow the on-disk files to be purely append only. And I think we can interact with these robustly without ever needing to use fsync - rather than trying to prevent file corruption at all costs, we can always recover from a previous state (which is simply truncating the append-only files).
This is still a very early exploration into the MMR implementation and I’m not sure yet whether it is worth pursuing beyond the next few days. But I would like to see what would be involved in a transactional leaf set, and in writing to the MMR files without ever calling fsync on them.
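The recovery idea can be sketched in miniature (everything here is illustrative: an in-memory Vec stands in for the append-only MMR file, and a plain field stands in for the committed size stored transactionally alongside the leaf set in lmdb). Uncommitted appends are simply truncated away on restart.

```rust
// Illustrative sketch: append-only backend with truncate-based recovery.
struct MmrBackend {
    data: Vec<u8>,        // append-only "file", never fsynced
    committed_len: usize, // recorded in the same db transaction as the leaf set
}

impl MmrBackend {
    fn append(&mut self, bytes: &[u8]) {
        self.data.extend_from_slice(bytes);
    }

    // Commit: record the new file size transactionally in the db.
    fn commit(&mut self) {
        self.committed_len = self.data.len();
    }

    // Crash recovery: truncate anything beyond the last committed size.
    fn recover(&mut self) {
        self.data.truncate(self.committed_len);
    }
}

fn main() {
    let mut backend = MmrBackend { data: Vec::new(), committed_len: 0 };
    backend.append(&[1, 2, 3]);
    backend.commit();
    backend.append(&[4, 5]); // written but not yet committed

    // Simulated crash + restart: the uncommitted tail is simply dropped.
    backend.recover();
    assert_eq!(backend.data, vec![1, 2, 3]);
}
```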