Testnet archive mode is stuck at downloading blocks

Has anyone had a chance to test archive mode on testnet? I tried both Grin 5.4.0-alpha.0 and the latest Grim 0.3.0 alpha, but hit the same issue: blocks won't download, only the headers. No error log is generated.

Or could someone share the full archive testnet chain_data with me? I need to test a few nodes.


I think @bruges had similar experiences.

Maybe he has found a solution.

You can’t run an archive node on the testnet, because nobody runs one, so you can’t bootstrap from the very first block.


Got it. An archived testnet doesn’t bring much benefit to the Grin ecosystem anyway.

Interesting, so how did the first archive Grin node appear?

All nodes are archive nodes in the sense that they do not throw away block data they received while they were running. Running a node with archive mode enabled means you ask all connected peers for the raw block data to process, not just the pruned set of unspent outputs and their range proofs.
Note that although the block data is full, the node still prunes outputs and their rangeproofs locally. So an archive node cannot scan for information on past transactions, since their outputs are pruned. Being able to scan the full history is something that should be added, in my opinion. Currently it is only possible using code from Nicolasflamel.
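For reference, enabling archive mode is a single flag in `grin-server.toml` under the `[server]` section, assuming the current config layout (verify the exact name against the config your node generates):

```toml
[server]
# Ask peers for full raw block data from genesis instead of syncing
# only the pruned txhashset. Flag name assumed per the generated
# grin-server.toml; check your own config file.
archive_mode = true
```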

That leaves one issue to be settled: you cannot have two UTXOs with the same commitment at one time, so we can index UTXOs by commitment.

But if we stop pruning spent UTXOs, and someone recreates a previously spent output, then the unpruned output set does contain multiple outputs with the same commitment. That requires changes to the indexing scheme.
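A minimal sketch of what that indexing change could look like: instead of assuming one output position per commitment, the index maps each commitment to a list of positions. All names here (`Commitment`, `OutputPos`, `TxoIndex`) are hypothetical stand-ins, not Grin’s actual types.

```rust
use std::collections::HashMap;

// Illustrative stand-ins: a Pedersen commitment is 33 bytes in Grin,
// and outputs are addressed by position in the output MMR.
type Commitment = [u8; 33];
type OutputPos = u64;

#[derive(Default)]
struct TxoIndex {
    // One commitment can map to many positions once spent outputs
    // are kept around, so the value is a Vec, not a single position.
    by_commitment: HashMap<Commitment, Vec<OutputPos>>,
}

impl TxoIndex {
    fn insert(&mut self, commit: Commitment, pos: OutputPos) {
        self.by_commitment.entry(commit).or_default().push(pos);
    }

    // Returns every txo ever created with this commitment, spent or not.
    fn lookup(&self, commit: &Commitment) -> &[OutputPos] {
        self.by_commitment
            .get(commit)
            .map(Vec::as_slice)
            .unwrap_or(&[])
    }
}
```

The uniqueness rule for the *unspent* set stays as it is; only this historical index has to tolerate duplicates.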

Do you mean that it would create a situation where non-archive nodes only check the pruned set of commitments for uniqueness, while archive nodes would check for uniqueness across the whole set? I can think of two solutions.

One would be a double system: two datasets and two indexes, one for the pruned set and one for the unpruned set. That way the rules do not need to change, and archive nodes would essentially work like regular pruned nodes, except that they have an extra data directory to maintain and a second index to scan. In this case the changes would be in:
a) The functions that do the pruning: when archive mode is enabled, they should move the data instead of deleting it.
b) The functions that scan the blockchain: if archive mode is set to true, these functions should check whether there is an additional dataset of pruned outputs.

The second option would be much simpler. No extra datasets or indexes, just an alternative scan function, e.g. a full-scan that scans the raw block data. It would be much slower to run, but perhaps that is acceptable since it is an edge case. Most likely it would only be used when trying to regain some wallet history, or when someone in paranoia mode wants to validate that there were historically no mistakes in pruning. This last option would only require converting Flamel’s code to Rust and adding an API call.
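A rough sketch of what such a full-scan could look like: walk the raw blocks and collect every occurrence of a commitment, returning all matches since a spent commitment can be recreated later in the chain. `Block` and `Output` are hypothetical stand-ins for the real chain types, and a real implementation would stream blocks from disk rather than hold them in memory.

```rust
// Illustrative stand-ins, not Grin's actual types.
struct Output {
    commitment: Vec<u8>,
}

struct Block {
    height: u64,
    outputs: Vec<Output>,
}

// Linear scan over raw block data: slow, but needs no extra index.
// Returns every (block height, output index) where the commitment
// appears, because there may be more than one match.
fn full_scan(blocks: &[Block], target: &[u8]) -> Vec<(u64, usize)> {
    let mut hits = Vec::new();
    for block in blocks {
        for (i, out) in block.outputs.iter().enumerate() {
            if out.commitment == target {
                hits.push((block.height, i));
            }
        }
    }
    hits
}
```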

There is no requirement for commitments to be unique over all (spent and unspent) txos, so archive nodes shouldn’t check for that.

I’m just pointing out that if you add functionality to archive nodes to look up txos by commitment, then it should be able to return multiple txos.


My question: in case Grin needs to fork, we would fork on testnet first, and then another “archived” testnet would be born?