Unique kernels without transaction expiry

L always has a “size” of relative_cap + 1 blocks (more precisely, L contains all the kernels of the last relative_cap + 1 blocks, where I also include the current block in the count).
So a small relative_cap will result in a very big list L_0, for example. And a very big relative_cap, as you said, surely keeps the total number of elements of L ∪ L_0 always smaller than M = max_number_of_kernels_per_block * (relative_cap + 1) (which can be quite big if relative_cap is really big). But even setting relative_cap = 1 year is interesting: for the grinners of the years 2040 and beyond, it means maintaining a list of kernels 20 times smaller than a naive kernel uniqueness check.
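To make that bound concrete, here is a minimal Python sketch of the worst-case size M and of L maintained as a sliding window over the last relative_cap + 1 blocks. The names and numbers (MAX_KERNELS_PER_BLOCK, the relative_cap value) are illustrative assumptions, not consensus values:

```python
from collections import deque

# Illustrative assumptions only: NOT real consensus values.
MAX_KERNELS_PER_BLOCK = 100
relative_cap = 5  # wallet parameter, counted in blocks

# Worst-case number of elements in L ∪ L_0 when no kernel has a
# relative_height above relative_cap:
M = MAX_KERNELS_PER_BLOCK * (relative_cap + 1)

# L as a sliding window over the last relative_cap + 1 blocks
# (current block included): one set of kernels per block.
L = deque(maxlen=relative_cap + 1)

def on_new_block(block_kernels):
    # a deque with maxlen drops the oldest block's kernels automatically
    L.append(set(block_kernels))

for height in range(10):
    on_new_block({f"kernel_{height}_{i}" for i in range(3)})
```

After any number of blocks, L never holds more than relative_cap + 1 block entries, which is what keeps its size constant over time.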

relative_cap is a wallet parameter. It “replaces” relative_max (a consensus parameter) present in the kernel expiry proposal.
So relative_cap is really a wallet parameter that the user can choose; it has nothing to do with consensus and is common to all kernels. It is not a per-kernel value; the per-kernel value is called relative_height in the doc. The pseudo code works with any value of relative_cap that the user sets. I did not cover the scenario where a user (for example because blocks become full) changes the value of their relative_cap. They must be careful about changing it anyway, because the algorithm assumes a fixed relative_cap (i.e. the user sets it before IBD and never changes the value later), although there are probably procedures that could change it (changing it changes your lists L_0 and L).
So, relative_cap is a wallet parameter, useful for doing the uniqueness check in the proposal for non-expiring kernels (the algorithm presented in pseudo code), since the non-expiring proposal does not set a maximum allowed relative height as the kernel expiry proposal does (called relative_max in the doc).
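As a rough illustration (not the proposal's exact pseudo code), the split between the window L and the list L_0 could look like this sketch, where lookup_old is a hypothetical stand-in for the historical-kernel lookup of line 15:

```python
def check_and_record(kernel, relative_height, relative_cap, L, L0, lookup_old):
    """Hypothetical sketch, not the proposal's exact pseudo code.

    L:  kernels of the last relative_cap + 1 blocks (one set per block).
    L0: kernels seen with relative_height > relative_cap.
    lookup_old: stand-in for the historical lookup of line 15
                (e.g. querying a non-pruning node with inclusion proofs).
    Returns True if the kernel is unique and accepted.
    """
    if relative_height <= relative_cap:
        # common case: the window L is enough to guarantee uniqueness
        return not any(kernel in block for block in L)
    # rare case: check the small list L0 first, then the old kernels
    if kernel in L0 or lookup_old(kernel, relative_height):
        return False
    L0.add(kernel)  # remember it so later duplicates are caught locally
    return True
```

The point of the split is that the common case touches only the fixed-size window L, while the historical lookup is reserved for the rare kernels whose relative_height exceeds relative_cap.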

After the IBD, your proposal still requires that a node has the entire kernel history available for uniqueness checking lookups (in line 15).

Yes, it is correct.

When you check the uniqueness of a kernel with relative_height strictly greater than relative_cap, you need to look up some old kernels (not a lot, but they could indeed be old). In particular, a nimble node that prunes kernels would need to query some old kernels from non-pruning nodes (with the inclusion proofs) if it encounters a kernel with relative_height strictly greater than relative_cap.

Edit: A nimble node could still prune the excess and the signature nonces. So it would get rid of about two-thirds of a kernel if it can't query and then check the uniqueness data in real time. This works because it is enough to apply the uniqueness check on s alone, where kernel = (excess, R, s). One could even think of applying the uniqueness check on the 16-byte truncations of s, or something like that: if all 16-byte truncations of s are unique, then the kernels are unique.
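A minimal sketch of that truncation idea (purely illustrative): uniqueness of the 16-byte prefixes implies uniqueness of the full s values, while a prefix collision only flags a *possible* duplicate that the full s would be needed to confirm:

```python
seen_prefixes = set()

def unique_by_truncation(s_bytes: bytes) -> bool:
    """Uniqueness check on the 16-byte truncation of the 32-byte
    signature scalar s. If every prefix is distinct, every full s
    (and hence every kernel) is distinct. A prefix collision only
    means "possible duplicate": different scalars can share a
    prefix, so the full s would have to be compared to confirm."""
    prefix = s_bytes[:16]
    if prefix in seen_prefixes:
        return False
    seen_prefixes.add(prefix)
    return True
```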

First, good job on the proposal and thanks for making such a nice pdf.

I’ll mention the things that went through my mind as I was going through it. Please correct me if I misunderstood any part.

Kernel expiry

I think the first kernel expiry solution has the following side effects:

  1. the transaction building must be finished within relative_max minutes, otherwise the transaction is invalid before it is broadcast
  2. I can publish a tx 5 blocks before expiration and the miner would be incentivized to include it in one of the next 5 blocks because that is the only opportunity to pick up these fees (people might converge to doing this when blocks are full). I’m not really sure about this one.

No kernel expiry

I’m not sure about this claim:

As an important note, it is useful to see, similarly to the case of the proposal with kernel expiry, that we must recommend all validating nodes to refuse all newly broadcasted kernels that have a signature block height equal to a block height in the past. For the present uniqueness proposal without kernel expiry, the reason is not security or protection against DOS attacks like in section 3, but is simply for scalability purpose, since the amount of data to store a small relative height is always smaller than the amount of data needed to store a larger relative height, which a signature block height set to a block far away in the past would imply.

A uint32 variable takes a constant size regardless of whether it holds the value 0, 1 or 2^32 − 1. So unless one puts an upper limit on the number by consensus (which introduces expiration, as in the kernel expiry version), you need a type that allows the biggest inclusion possible (to avoid having expiration) and hence you don’t end up saving space (I think).

Regarding the L0 check, I think we might need to consider how much of a hit on resources lines 10-18 are, to avoid any possible resource attacks, e.g. what happens if someone creates 1000 transactions that have very big relative locks? Would this trigger the L0 check for all of them and take a lot of time to compute? If such a thing were possible, it could open attacks on miners as well and encourage them to publish empty blocks instead. Again, I don’t know if this is possible, it’s just something that went through my mind.

As you and @tromp already discussed, this does require one to keep all the kernel public keys, so a nimble node would go from a constant size (accumulator curve point) to a linear size, but with the reduction by 2/3 due to dropping everything except the kernel public key.

I have one question regarding the IBD. Does this require a linear IBD validation or can it be done in parallel?

P.S. I’ve never thought about parametrizing validation. It’s an interesting idea

Thanks a lot for digging into the proposals Phyro, and for providing this post.

This is only true if signature_block_height = current_block_height.
signature_block_height can always be a block in the future for example, that does not exist yet (see the part on timelocks).

Yes this is something to take care of in the expiry proposal. You can look at the part on the DOS attack. In the first vulnerability, nodes refuse transactions when they see a signature_block_height set into a block height in the past.

It is similar to the claim above, nodes refuse new transactions with signature_block_height set to a block in the past. This is to save data.

Use one uint8 (or uint’x’) when relative_height fits inside it, and use two uint8s if not (and so forth and so on). It makes the blockchain more scalable.
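That is essentially a variable-length integer encoding. A standard LEB128-style sketch (illustrative only, not the doc's actual serialization format):

```python
def encode_varint(n: int) -> bytes:
    """LEB128-style encoding: 7 payload bits per byte, high bit set
    on every byte except the last. Small relative_height values cost
    one byte; bigger values grow only as needed."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)
```

With this scheme a relative_height under 128 costs a single byte, and the cost only grows logarithmically with the height, which is exactly the scalability argument above.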

Kernels that end up going into the list L0 are kernels that waited a long time in the mempool (more than relative_cap blocks if signature_block_height = current_block_height, for example). This is not really up to the sender of the tx, who can only choose fees to influence it. The rest depends on miners and on whether blocks are full.

As mentioned in the post above, it is s that we must check uniqueness against, since s will always be different for two different messages (assuming valid signatures). We can prune public key and public nonce.

Good question. But I would think that yes.
Provided that you do an additional check at the end of the parallelization: the union of the final L0 lists (one output L0 list per thread) must contain unique kernels only. This is because, in the algorithm of the pseudo code, the uniqueness check for kernels that are candidates to go into L0 (the ones with relative_height strictly greater than relative_cap) is done against the list L0 first. So if you use threading, each thread may end up with only a partial L0 list.

EDIT: What I just said here is not sufficient. But I think it is possible; it would just require a bit of work to formalize. You never know in advance which block heights line 15 of the pseudo code’s algorithm will touch, so all the kernels in the L0 lists would require additional checks at the end of the threading. Not sure how (much more) efficient it could get. It will all boil down to how many kernels are in the L0 lists; if not too many, then parallel IBD is faster. But no formal proposal yet :slight_smile:
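Sketching just the post-threading merge step (illustrative, with a hypothetical merge_partial_L0 helper): each IBD thread returns its partial L0 list, and the final pass verifies that their union is duplicate-free, since a candidate was only checked against its own thread's partial list:

```python
from collections import Counter

def merge_partial_L0(partial_lists):
    """Each IBD thread outputs a partial L0 list; duplicates can hide
    across threads because a candidate kernel was only checked against
    its own thread's partial L0. This final pass rejects any kernel
    appearing in more than one partial list."""
    counts = Counter(k for part in partial_lists for k in part)
    duplicates = sorted(k for k, c in counts.items() if c > 1)
    if duplicates:
        raise ValueError(f"duplicate kernels across threads: {duplicates}")
    return set(counts)
```

As noted above, this merge alone is not the whole story (the line-15 historical checks still need to be settled per kernel), but it shows why the final cost scales with the total size of the L0 lists.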

Indeed, users can choose their relative_cap.