[Antioch] Status Update Q1 2021

Funding request for reference -

I have spent the past couple of weeks getting back up to speed after taking some time off in January.

I am currently focused on “block archival support” and have made progress with some ancillary work necessary to support this.

Tracking issue for “block archival support” -


Investigation into block archival identified a performance issue related to block sync and how we identify “missing” blocks that need requesting from peers -

This is resolved here -
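
For context, here is a minimal sketch of the general idea of identifying “missing” blocks to request — this is my own simplified illustration, not the actual fix or the grin-node code; the helper name and batch logic are assumptions:

```rust
// Sketch only: given the set of block heights we already have locally,
// find the next batch of missing heights to request from peers.
use std::collections::HashSet;

/// Hypothetical helper: up to `max_batch` missing heights between
/// `from_height` and `chain_tip`, given the heights present locally.
fn missing_heights(
    local: &HashSet<u64>,
    from_height: u64,
    chain_tip: u64,
    max_batch: usize,
) -> Vec<u64> {
    (from_height..=chain_tip)
        .filter(|h| !local.contains(h))
        .take(max_batch)
        .collect()
}

fn main() {
    let local: HashSet<u64> = [1, 2, 3, 5, 6, 9].into_iter().collect();
    // Request at most 4 missing blocks starting from height 1.
    let to_request = missing_heights(&local, 1, 10, 4);
    assert_eq!(to_request, vec![4, 7, 8, 10]);
    println!("requesting blocks at heights {:?}", to_request);
}
```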


Also uncovered an issue with banning peers when requesting too many blocks (i.e. during archival sync) due to “abusive” peer behavior (p2p msg rate limit exceeded).

As part of block archival support this has been reworked to introduce “rate limiting”, so that p2p message rates stay within acceptable limits and we do not risk getting banned.

Fix is here -
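
To illustrate the idea of keeping request rates under a peer’s message limit, here is a rough sketch — not the actual grin p2p code; the type, window size, and numbers are illustrative assumptions:

```rust
// Sketch only: throttle outbound block requests so we stay under a
// per-peer message rate limit instead of tripping "abusive peer" bans.
use std::thread;
use std::time::{Duration, Instant};

/// Hypothetical limiter: at most `max_per_window` requests per `window`,
/// sleeping once the budget for the current window is exhausted.
struct RequestLimiter {
    max_per_window: u32,
    window: Duration,
    sent_in_window: u32,
    window_start: Instant,
}

impl RequestLimiter {
    fn new(max_per_window: u32, window: Duration) -> Self {
        RequestLimiter {
            max_per_window,
            window,
            sent_in_window: 0,
            window_start: Instant::now(),
        }
    }

    /// Block until it is safe to send one more request.
    fn wait_for_slot(&mut self) {
        if self.window_start.elapsed() >= self.window {
            // New window: reset the budget.
            self.window_start = Instant::now();
            self.sent_in_window = 0;
        }
        if self.sent_in_window >= self.max_per_window {
            // Budget exhausted: sleep out the remainder of the window.
            let remaining = self.window.saturating_sub(self.window_start.elapsed());
            thread::sleep(remaining);
            self.window_start = Instant::now();
            self.sent_in_window = 0;
        }
        self.sent_in_window += 1;
    }
}

fn main() {
    // e.g. at most 10 block requests per second to any single peer.
    let mut limiter = RequestLimiter::new(10, Duration::from_secs(1));
    for height in 1..=25u64 {
        limiter.wait_for_slot();
        println!("requesting block at height {}", height);
    }
}
```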


Also tracked down the “block not found” error that we occasionally encounter during node restart.

This is actually related to some PIBD initialization code: we were not fully accounting for a likely edge case in how we rewind based on the most recent archival period (every 720 blocks). The fix is minor and I went ahead and merged it.
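
For a sense of the kind of boundary arithmetic involved, here is a tiny sketch — my own simplified illustration of rounding a height down to the most recent archival period boundary, not the actual PIBD code:

```rust
// Sketch only: the archival period is 720 blocks; find the most recent
// period boundary at or below a given height.
const ARCHIVAL_PERIOD: u64 = 720;

fn last_archival_boundary(height: u64) -> u64 {
    (height / ARCHIVAL_PERIOD) * ARCHIVAL_PERIOD
}

fn main() {
    assert_eq!(last_archival_boundary(719), 0); // still within the first period
    assert_eq!(last_archival_boundary(720), 720);
    assert_eq!(last_archival_boundary(1_500), 1_440);
}
```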


I have some code running locally that successfully begins requesting historical blocks (currently from a known set of “preferred” archival peers).
I am hoping to get some of this work into a PR’able state over the next couple of days. Roughly speaking, it suppresses the “state sync” (txhashset.zip) when running in “archival mode”, forcing block sync to begin requesting missing blocks from height 1 onwards. Now that we can successfully rate limit the p2p requests, we can simply continue block sync for all missing blocks up to the current height.
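
To make the sync-mode decision concrete, here is a rough sketch of the behavior described above — names and types are mine, not the grin-node API:

```rust
// Sketch only: an archival node skips the txhashset.zip "state sync"
// shortcut entirely and block-syncs the full history from height 1.
#[derive(Debug, PartialEq)]
enum SyncStep {
    /// Download txhashset.zip and fast-forward.
    StateSync,
    /// Request individual blocks starting from the given height.
    BlockSync { from_height: u64 },
}

fn next_sync_step(archive_mode: bool) -> SyncStep {
    if archive_mode {
        // Archival nodes want the full history, so never take the shortcut.
        SyncStep::BlockSync { from_height: 1 }
    } else {
        SyncStep::StateSync
    }
}

fn main() {
    assert_eq!(next_sync_step(true), SyncStep::BlockSync { from_height: 1 });
    assert_eq!(next_sync_step(false), SyncStep::StateSync);
}
```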

To release this fully we will need to support peer selection/filtering based on archival node support, so there is still some work to do there. I suspect we may still want to take advantage of “preferred peer” support to ensure we have good connectivity when starting up a new archival node. But I think that’s acceptable if a node opts in to archival mode and wants to reliably sync full history - you need to know at least a couple of archival nodes to sync from.
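
As a rough illustration of the peer selection/filtering step, here is a sketch — the `Peer` struct, capability flag, and fallback behavior are assumptions of mine, not the grin p2p types:

```rust
// Sketch only: filter connected peers down to those advertising archival
// support, falling back to configured "preferred" archival peers if needed.
#[derive(Clone, Debug)]
struct Peer {
    addr: String,
    supports_archival: bool,
}

fn archival_peers(connected: &[Peer], preferred: &[Peer]) -> Vec<Peer> {
    let candidates: Vec<Peer> = connected
        .iter()
        .filter(|p| p.supports_archival)
        .cloned()
        .collect();
    if candidates.is_empty() {
        // Fall back to the explicitly configured archival peers.
        preferred.to_vec()
    } else {
        candidates
    }
}

fn main() {
    let connected = vec![
        Peer { addr: "10.0.0.1:3414".into(), supports_archival: false },
        Peer { addr: "10.0.0.2:3414".into(), supports_archival: true },
    ];
    let preferred = vec![
        Peer { addr: "archive.example:3414".into(), supports_archival: true },
    ];
    let peers = archival_peers(&connected, &preferred);
    println!("syncing history from {} peer(s)", peers.len());
}
```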


I am aiming for some kind of “beta” of this in the next week or so and hopefully we can be doing some wider testing of this functionality soon after that.


Thank you.



Thank you so much for this status update!

Unsurprisingly this update is late…

Continuing to make progress on “full archival sync”.
Tracking issue here -


I have a full archival sync running locally and it runs to completion with some caveats.
Hoping to have this out for a wider beta release later this week.

The big caveat is that the chain compaction process is slow when used alongside archival sync.

This led down a bit of a rabbit hole exploring various options for making chain compaction more efficient (or doing it less often, or limiting the volume of data being compacted, etc.).

This is still a continuing investigation.
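
To show what one of those options might look like, here is a purely illustrative sketch of “limiting the volume of data being compacted” per pass — this is my own example of the idea, not grin code or a decided approach:

```rust
// Sketch only: compact in bounded chunks interleaved with the sync loop,
// instead of one large compaction pass at the end of archival sync.
fn compact_in_chunks(total_prunable: u64, chunk: u64, mut compact_range: impl FnMut(u64, u64)) {
    let mut start = 0u64;
    while start < total_prunable {
        let end = (start + chunk).min(total_prunable);
        // Compact only a bounded slice, then yield back to the caller.
        compact_range(start, end);
        start = end;
    }
}

fn main() {
    compact_in_chunks(10_000, 3_000, |from, to| {
        println!("compacting prunable positions {}..{}", from, to);
    });
}
```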


The plan for the next few days is to park the compaction investigation and move “archival sync” forward so others can test it out, then pick the compaction investigation back up after that.
Ideally archival sync runs (albeit slowly) without a significant pause at the end for chain compaction.


Why even run chain compaction if the node is “archival”? What is there to compact?

An archival node maintains a full block history in the local db.
Compaction allows us to maintain the PMMR data structures in an efficient way (we can prune and compact historical spent outputs). The full block history means this PMMR data can still be pruned, and it is desirable to do so as it is effectively redundant data on an archival node (and takes up significant disk space beyond the blocks db).
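
A heavily simplified sketch of that idea — not the real PMMR code, just an illustration of why spent-output data is prunable when the full blocks are kept in the db:

```rust
// Sketch only: the output MMR holds data for every output ever created;
// once an output is spent, its leaf data can be dropped, because an
// archival node can always recover it from the full block in the local db.
use std::collections::{HashMap, HashSet};

fn main() {
    // Leaf position -> output data kept by the PMMR backend.
    let mut pmmr_leaf_data: HashMap<u64, Vec<u8>> = HashMap::new();
    for pos in 0..6u64 {
        pmmr_leaf_data.insert(pos, vec![pos as u8; 32]);
    }

    // Positions of outputs that have since been spent.
    let spent: HashSet<u64> = [1, 3, 4].into_iter().collect();

    // Compaction: drop the leaf data for spent outputs. The raw output data
    // is redundant here because the blocks db already contains it.
    pmmr_leaf_data.retain(|pos, _| !spent.contains(pos));

    println!("{} unspent leaves remain in the PMMR", pmmr_leaf_data.len());
}
```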


As long as it is smaller than Bitcoin’s full node, you will not hear me complain. Archive nodes are probably only for those interested in investigating the blockchain, who will have the space available for it. E.g. I would probably export/convert it to a graph database, not exactly memory saving :stuck_out_tongue: