Starting a forum topic to track my weekly status updates.
I plan on posting an update here (at least semi-regularly) with an overview of what I’ve been working on and what I think my priorities are likely to be over the following week.
tl;dr A proposal for a minimal implementation of “relative timelocks” via Grin/MW transaction kernels.
RFC is currently in an informal review period.
Assuming there is sufficient interest in taking this further, the next step will be to nudge someone to volunteer to “shepherd” this through two weeks of formal review.
Related to this is some experimental work on the “kernel index” that will be necessary for verification of NRD kernels.
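The NRD rule itself is simple to state: a kernel is invalid if another kernel with the same excess appeared too recently. A minimal sketch of that check, assuming a lookup of recent heights keyed by excess commitment (the names here are hypothetical, not the grin API):

```rust
use std::collections::HashMap;

/// Hypothetical index: NRD kernel excess commitment (33 bytes) ->
/// heights at which a kernel with that excess was included.
type KernelIndex = HashMap<[u8; 33], Vec<u64>>;

/// An NRD kernel at `height` is valid only if no kernel with the same
/// excess appeared within the previous `relative_height` blocks.
/// (Whether the comparison is strict or inclusive is an RFC detail;
/// here a gap of exactly `relative_height` is treated as valid.)
fn nrd_valid(index: &KernelIndex, excess: &[u8; 33], height: u64, relative_height: u64) -> bool {
    index.get(excess).map_or(true, |heights| {
        heights
            .iter()
            .all(|&h| height.saturating_sub(h) >= relative_height)
    })
}
```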
tl;dr We experimented with using the header MMR directly in v3.0.0
to track the “head” of the header chain (by definition the latest entry in the MMR).
But this introduced some undesirable behavior, surfacing as the universally hated `ERROR: failed to obtain lock for try_header_head` messages in the log file.
We have reverted these changes and gone back to simply storing the head of the header chain in the local db (lmdb).
The cost of needing to maintain this additional db index is offset by the simplicity of dealing with this transactionally.
This is still in review and not yet merged.
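For context, the "latest entry in the MMR" is fully determined by the MMR size: an MMR over `n` leaves contains `2n - popcount(n)` nodes, so the head can always be recomputed from the size alone. A quick sketch of that relationship (toy code, not grin's MMR implementation):

```rust
/// Total number of nodes in an MMR containing `n` leaves.
/// Each appended leaf adds itself plus one parent per carry in the
/// binary leaf count, giving the closed form 2n - popcount(n).
fn mmr_size(n_leaves: u64) -> u64 {
    2 * n_leaves - n_leaves.count_ones() as u64
}

/// Recover the leaf count from an MMR size, if the size is valid.
fn n_leaves(size: u64) -> Option<u64> {
    // 2n - popcount(n) is strictly increasing in n, so a linear
    // search suffices for a sketch (binary search in real code).
    (0..=size).find(|&n| mmr_size(n) == size)
}
```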
tl;dr When processing a block we update the local db and various files on disk (MMRs etc.)
It is possible to cause data corruption if these files are partially rewritten for any reason (power failure, etc.).
This PR adds “checkpoint” functionality to allow the node to robustly restart from a “last known good” position.
Bonus feature: The checkpoint file contains a block header hash (hex format) and can be used to restart the node at an arbitrary recent block if you are brave enough.
```
cat chain_data/grin.checkpoint
0000018ceb445195d65c6a5a86d3fd2918bf4059605f668ef3c72d56dbaba6de
```
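Since the checkpoint file is just the hex encoding of a 32-byte header hash, inspecting or validating it is trivial. A sketch of the parse step, assuming only that the file contains 64 hex characters plus an optional trailing newline (not the actual grin parsing code):

```rust
/// Parse a checkpoint file's contents into a 32-byte header hash.
/// Returns None unless the contents are exactly 64 hex characters.
fn parse_checkpoint(contents: &str) -> Option<[u8; 32]> {
    let s = contents.trim();
    if s.len() != 64 || !s.chars().all(|c| c.is_ascii_hexdigit()) {
        return None;
    }
    let mut hash = [0u8; 32];
    for (i, byte) in hash.iter_mut().enumerate() {
        // Each output byte is two hex characters.
        *byte = u8::from_str_radix(&s[2 * i..2 * i + 2], 16).ok()?;
    }
    Some(hash)
}
```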
Some early thoughts around how to make our “kernel features” softfork friendly.
Still very much in early stages of thinking through what this would involve.
With the NRD Kernel RFC moving into review we should have a better idea of whether this is a worthwhile thing for me to continue focusing on.
I suspect we do have enough interest and that it would be beneficial to continue working toward getting this done for v4.0.0
(the next scheduled HF).
Alongside this I am planning to iterate on the “kernel pos” index work (https://github.com/mimblewimble/grin/pull/3273) to validate this approach makes sense.
The proposed NRD kernel variant got me thinking a bit more about what a more softfork friendly "unknown" kernel variant would look like.
I am planning on spending a bit more time playing around with this.
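The core idea behind a softfork friendly "unknown" kernel variant is that old nodes must be able to deserialize (and relay) kernels carrying feature bytes they do not recognize, rather than rejecting them outright. A minimal sketch of that shape (hypothetical types and illustrative feature byte values, not grin's serialization code):

```rust
/// Known kernel feature variants, plus a catch-all that preserves the
/// raw feature byte and payload so unknown kernels round-trip intact.
#[derive(Debug, PartialEq)]
enum KernelFeatures {
    Plain,
    Coinbase,
    /// A feature byte this node does not recognize: keep it (and its
    /// payload) verbatim so future softforked kernels survive relay.
    Unknown(u8, Vec<u8>),
}

fn read_features(feature_byte: u8, payload: &[u8]) -> KernelFeatures {
    match feature_byte {
        0 => KernelFeatures::Plain,
        1 => KernelFeatures::Coinbase,
        b => KernelFeatures::Unknown(b, payload.to_vec()),
    }
}
```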
One necessary piece of technical implementation required by the “No Recent Duplicate” NRD proposal is a local db index tracking kernel MMR pos for each NRD kernel encountered. This index needs to support multiple entries indexed by the kernel excess commitment.
The majority of my time last week was focused on this.
We had an existing WIP PR that was experimenting with a “linked list” implementation for the output pos index. This has now been repurposed and iterated on to focus on the kernel pos index.
The index currently supports coinbase kernels as proof of concept (we need a single kernel feature to index). Once NRD kernels exist we can rework this to be NRD kernel specific.
The index currently supports applying new kernels as part of block processing and "rewinding" to handle fork/reorg.
TODO - The index does not currently prune/compact historical data. The next step is likely to add functionality here, maintaining the index only for recent data by pruning old entries during chain compaction.
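A sketch of the apply/rewind surface such an index needs, modeling the multiple-entries-per-excess requirement with a vector per commitment (the linked-list-in-lmdb detail is elided; names are hypothetical, not the PR's API):

```rust
use std::collections::HashMap;

/// In-memory model of the kernel pos index: each excess commitment
/// maps to the kernel MMR positions at which a kernel with that
/// excess appears, in insertion order (most recent last).
#[derive(Default)]
struct KernelPosIndex {
    entries: HashMap<[u8; 33], Vec<u64>>,
}

impl KernelPosIndex {
    /// Apply a kernel seen while processing a new block.
    fn push(&mut self, excess: [u8; 33], pos: u64) {
        self.entries.entry(excess).or_default().push(pos);
    }

    /// Rewind during fork/reorg: drop every entry past `pos`.
    fn rewind(&mut self, pos: u64) {
        for v in self.entries.values_mut() {
            v.retain(|&p| p <= pos);
        }
        self.entries.retain(|_, v| !v.is_empty());
    }

    /// Most recent MMR position for an excess, if any.
    fn last_pos(&self, excess: &[u8; 33]) -> Option<u64> {
        self.entries.get(excess).and_then(|v| v.last().copied())
    }
}
```

Pruning during chain compaction would be the mirror image of `rewind`: drop every entry at or below some horizon position instead.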
Also took an unrelated detour last week to investigate PayJoin which seems to be suddenly getting a lot of attention over in the Bitcoin ecosystem.
I wrote some early thoughts up here - PayJoin (P2EP) in Grin
Continued to iterate on the RFC for NRD kernels -
We had a decent solution to old state "revocation" involving additional transaction kernels, but @tromp came up with a more elegant solution that we think is pretty much as minimal as we can get: it effectively reuses the NRD kernel from the "close" and "settle" halves of the slow transaction as part of the "revoke" process. It seems unlikely a payment channel implementation can be done with any less data than this.
@tromp wrote the details up on the mailing list here - https://lists.launchpad.net/mimblewimble/msg00636.html
I added a section to the RFC with a high level description of this revised approach, with payment channels as the motivating factor for introducing NRD kernels -
The “kernel pos” PR is getting close to done. Still need to implement pruning/compaction and clean the code up a bit but the index itself is implemented.
There is another PR up in draft with some initial work on the serialization/deserialization of NRD kernels themselves.
This PR led us down the path of thinking about exactly how to roll this out with the hardfork in terms of block and transaction validation rules. Still some more thinking to do around that but making progress on the PR.
Some final edits and additions to the NRD Kernel RFC -
Specifically, added a section on consensus rules and HF3 rollout -
In parallel I have been working on the implementation for these rules and serialization/deserialization of NRD kernels here -
Also took a detour and implemented a “proof of concept” of transaction kernel “halves” in the form of a test -
This will not be used for the NRD kernel implementation initially, but it demonstrates a proposal for how NRD kernels can be used in the future. In real-world usage the excess (and offset) would be handled multi-party via a musig protocol, with each participant adjusting their offset to allow for a shared excess value.
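The offset arithmetic behind a shared excess is easy to illustrate with plain modular integers standing in for secp256k1 scalars (everything below is a toy model, not real crypto): each half's offset is chosen so that both halves commit to the same kernel excess.

```rust
// Toy prime standing in for the secp256k1 group order.
const ORDER: u64 = 1_000_003;

/// Modular subtraction in the toy scalar field.
fn sub(a: u64, b: u64) -> u64 {
    (a % ORDER + ORDER - b % ORDER) % ORDER
}

/// Given one half-transaction's blinding sum and the shared excess both
/// halves agreed on, pick the offset so that blind_sum = excess + offset.
fn offset_for(blind_sum: u64, shared_excess: u64) -> u64 {
    sub(blind_sum, shared_excess)
}
```

The point of the construction is that the two halves end up with different offsets but an identical kernel excess, which is what lets a single NRD kernel serve both.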
It has been a couple of weeks since I last posted an update.
There are some important changes to our plan for rolling out NRD kernels.
The primary motivation for getting NRD kernel support in for HF3 was as preparation work for our payment channel implementation.
There is no immediate use case for NRD kernels and no tangible benefit until we support true multi-party/multi-sig outputs. And for this we need full musig and the multi-round tx building flow it requires (secure exchange of committed nonce values, etc.).
So we plan to roll this out via a "feature flag", with it enabled on floonet/testnet and disabled on mainnet.
The feature flag also allows us to have comprehensive test coverage of both scenarios (enabled and disabled) to ensure the subsequent smooth activation on mainnet, presumably at HF4.
This will allow us to take advantage of the period between HF3 and HF4 to fully leverage floonet/testnet for testing NRD kernels alongside multi-party outputs which are tentatively being discussed for inclusion in HF4.
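Mechanically the feature flag is just a consensus check keyed on chain type and height, along these lines (hypothetical names and sketch only; the real check lives in grin's consensus code):

```rust
#[derive(Clone, Copy, PartialEq)]
enum ChainType {
    Mainnet,
    Floonet,
}

/// NRD kernels are accepted on floonet from the HF3 height onward and
/// rejected on mainnet until a later activation (presumably HF4).
fn nrd_enabled(chain: ChainType, height: u64, hf3_height: u64) -> bool {
    chain == ChainType::Floonet && height >= hf3_height
}
```

Keeping the flag as a single predicate like this is what makes it cheap to run the full test suite in both the enabled and disabled configurations.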
The NRD RFC is at the end of the final comment period and likely to be merged shortly.
The ser/deser PR has been reworked to incorporate the "feature flag" as discussed above and is in final review. We are hoping to get this merged in the next couple of days.
Once this is merged I can shift focus to finalizing the NRD rules implementation here.
Quick update. Focus over the past couple of weeks has been getting everything shored up for HF3.
4.0.0-beta.2 is out and I'd encourage everybody to help by getting involved in testing it.
NRD kernels are now supported and enabled (once we hit the HF3 block height) on floonet. This was a relatively big PR. Much of the focus was on test coverage around the HF3 block height in various configurations.
Off the back of those changes I refactored the existing pool tests to make NRD testing significantly easier. This ended up being a larger project than I was originally anticipating. But we ended up deleting a lot of redundant test code as part of this.
Over the next couple of weeks I'm going to be focused on helping review PRs, making any necessary last-minute changes based on testing of 4.0.0-beta, and making sure HF3 rolls out as smoothly as possible.