Unique kernels without transaction expiry

Sorry, what are you talking about? I’d appreciate it if you read the doc. Take your time.

There are three scenarios to consider -

  1. verify transactions during relay (I think the doc covers this well)
  2. verify blocks without having seen the transactions
  3. verify historical kernels (fast sync for example)

I mean a historical validator in the context of (3), but (2) is also relevant here.
A historical validator needs to validate kernels from months, maybe years, ago, and for each one we need to verify the signature.

wow, thank you for the blockchain 101 :slight_smile:

Please read the doc.

It is incredible that, after all these months of debate we have had, you think that I don’t know there are historical validators in a blockchain, and that I would have forgotten to cover this 101.

I can’t understand why, instead of targeting me with your claims (there was no question at all in your first post, where you wondered about live verification), you do not take the time to read, or at least to frame your post in the form of what we call a “question” in English.

Reading at least the first page or two of the protocol seems to me the natural thing to do first, rather than imagining that I failed at something as basic as 2 + 2 = 5.
Respecting other people’s time requires at least reading the first page of the protocol before throwing claims in the air.

Apologies for coming across overly negatively. That was definitely not my intention.

The proposal discusses transaction validation.
There is no mention of either block validation or historical kernel validation in the proposal. I’m not saying this as a criticism. I thought it was worth making the point that the proposal changes signature verification by requiring external data in this context.

I do think we need to discuss the tradeoffs objectively.

What is the point of doing this, rather than just including the signature height in the kernel? Is it just so we don’t need to include the full 8 bytes (u64 height) for every kernel, and can get away with just 2 bytes instead (u16 relative height)? Or is there something more subtle that I’m missing?
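For concreteness, something like the following is the comparison I have in mind (hypothetical field names and layouts, not the actual kernel struct):

```rust
// Hypothetical kernel layouts, only to make the size comparison concrete.
// Neither of these is the actual grin TxKernel definition.
struct KernelWithAbsoluteHeight {
    excess: [u8; 33],     // kernel excess commitment
    signature: [u8; 64],  // signature over the kernel message
    fee: u64,
    signed_height: u64,   // full 8-byte absolute height carried in the kernel
}

struct KernelWithRelativeHeight {
    excess: [u8; 33],
    signature: [u8; 64],
    fee: u64,
    relative_height: u16, // 2 bytes; the absolute height is derived at
                          // validation time from the inclusion block height
}
```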


Here’s the second DoS vulnerability:
[image: quoted description of the attack]

The described “attack” ignores the fact that there will be a dynamic fee market when blocks are full. Transactions don’t just pour in at a steady rate with a constant fee. Instead, the ebb and flow of demand will result in the following 2 possibilities:

  1. Demand for blockspace increases, driving fees higher, and pushing the “attacker’s” transactions out of the mempool (we prioritize by fee, among other things).
  2. Demand for blockspace decreases, driving fees lower, and resulting in the confirmation of the “attacker’s” transactions, costing them the transaction fees.

The risk here of financial loss is simply too high, with no financial benefit for the attacker. However, if you’re still concerned, there are always additional criteria we could add to the mempool. A well-designed mempool should only ever accept txs that it reasonably expects to include in a block, anyway.
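To illustrate that last point, here is a minimal sketch of a fee-ordered mempool that evicts its cheapest entries when full and rejects anything it would not expect to mine (hypothetical types and names, not the actual grin node code):

```rust
use std::collections::BTreeMap;

// Simplified, hypothetical mempool keyed by fee rate: when full, the cheapest
// entries are evicted first, so low-fee transactions are either pushed out as
// demand rises or eventually mined (costing the sender the fee).
struct Mempool {
    max_entries: usize,
    len: usize,
    // fee rate -> transaction ids paying that rate
    by_fee_rate: BTreeMap<u64, Vec<u64>>,
}

impl Mempool {
    fn accept(&mut self, tx_id: u64, fee_rate: u64) -> bool {
        if self.len >= self.max_entries {
            // Only accept the tx if it outbids the cheapest entry we hold.
            let lowest = match self.by_fee_rate.keys().next() {
                Some(&rate) => rate,
                None => return false, // full with zero capacity
            };
            if fee_rate <= lowest {
                return false; // not expected to make it into a block
            }
            // Evict one tx from the lowest-fee bucket.
            if let Some(bucket) = self.by_fee_rate.get_mut(&lowest) {
                bucket.pop();
                if bucket.is_empty() {
                    self.by_fee_rate.remove(&lowest);
                }
            }
            self.len -= 1;
        }
        self.by_fee_rate.entry(fee_rate).or_default().push(tx_id);
        self.len += 1;
        true
    }
}
```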


Ah, there is no mention of how to validate a signature in the proposal, really? You can bring up any number of trade-offs that you think the proposal has; I have zero issue with that, I tried to be objective from A to Z in my write-up, and I am totally fine with you finding ones I forgot. Even many. If only you knew how much I would have loved to see that same desire for objective consideration applied to some other proposals. That would have been interesting!

So, anyway, is there really no mention in the doc that message = inclusion_block_height - relative_height?
What else did I forget to mention for validation?
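For the record, this is roughly what that validation step amounts to: the verifier supplies the inclusion block height as external data and reconstructs the signed message from it. A sketch, with placeholder types and a stand-in verify_schnorr helper rather than any actual API:

```rust
// Placeholder types standing in for the real curve and signature types.
type PublicKey = [u8; 33];
type Signature = [u8; 64];

// Hypothetical kernel fields relevant to the check.
struct Kernel {
    excess: PublicKey,    // kernel excess, used as the verification key
    signature: Signature, // signature over the reconstructed message
    relative_height: u64,
}

// Stand-in for whatever Schnorr verification the node actually uses; it is
// only here to make the data flow explicit.
fn verify_schnorr(_key: &PublicKey, _msg: &[u8], _sig: &Signature) -> bool {
    unimplemented!("replace with the real signature check")
}

// message = inclusion_block_height - relative_height, where the inclusion
// block height is the external (on-chain) piece of data.
fn verify_kernel(kernel: &Kernel, inclusion_block_height: u64) -> bool {
    let message = inclusion_block_height - kernel.relative_height;
    verify_schnorr(&kernel.excess, &message.to_le_bytes(), &kernel.signature)
}
```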

The fact that it uses external data (the block_height) is different from not mentioning how it validates (which I did mention), no? Or are they in fact the same thing? I am confused.

It is really incredible; it seems impossible to debate normally. The fact that it uses external data, yes, that’s a fact, I totally agree. You consider it a trade-off? No problem, I am fine with your opinion on this. Maybe you’ll find many other trade-offs. But the claim that I did not mention historical kernel validation is simply alienating, since I mentioned it countless times in the doc.
I will soon stop debating with you if you keep arguing in bad faith, because it is super, super, super annoying, and it is far from the first time.

I don’t think it’s easy to grasp the idea fully on a first read because there are quite a few new variables, so if antiochp missed something, it’s likely because the idea needs a few passes to sink in. For instance, it took me some time to understand that, in the first kernel-expiry version, the verifier reconstructs the missing piece of the message. I hope we will debate it on Keybase over the weekend to make sure we understand it and are on the same page. Let’s just please stop with the attacks and start off clean, with no assumptions of malice going forward. We are all sometimes difficult and/or annoying, but nobody is so horrible that we would need to be in conflict :wink:


I don’t fully understand your technical math conversations, but I think there are two sides here: one that wants to move faster and forward and thinks improvements are slow, and the core team, which wants to move slowly. This is the conflict.

I think it needs a balance. I don’t know how you manage this, but in the end this is about cryptography and mathematics, so a solution is possible.

If you clever guys can’t find a balance, it’s trouble.

The pattern is always the same: claims, not taking the time to read, assertion after assertion. This is far from the first time. It is a bad attitude in debates that Antioch always has with me, maybe less so with others, I don’t know. I have really never seen that before.

I cannot count the number of claims made without any background. Isn’t it better to read or to ask questions, rather than claim after claim, indirectly accusing the author of stupidity or of missing the 101? I have taken part in many scientific and technical debates in fields other than blockchain, and this attitude is entirely new to me. It is exceedingly surprising and annoying, even more so when repeated. People normally just ask questions when learning something new and take their time, get some air, instead of being directly unpleasant with the author of new ideas by insinuating, or claiming, in a passive-aggressive way, that they missed the basic 101.

No problem. There is nothing abnormal here :wink: Nobody can understand new things immediately in general.

Yes, exactly! That’s why you don’t directly accuse the author after 5 minutes, as Antioch did on several occasions, but do your few passes first. It also respects people, their work, and their time. If something is unclear, it’s no problem to ask questions.

Thanks for the input. I guess the highest risk is a scenario where the context lies somewhere in between 1 and 2? No huge demand for blockspace, but not too low either.

The risk of financial loss does indeed seem quite high. I have no idea how much a successful DoS attack on Bitcoin would cost. I saw a discussion on GitHub about the DoS attack risk in Zcash (it was about the first DoS vulnerability), and the Zcash people said the major risk is crashing verifying nodes, including miners’ nodes, which makes for a potentially highly rewarding attack: you can diminish hashing power a lot and combine this attack with a much cheaper reorg, and a double spend as a result.

However, crashing nodes with DoS attacks seems like something Bitcoin has already thought about and is not vulnerable to, I guess, either because:

  1. its mempool logic and infrastructure never result in crashing nodes, even in the case of the more sophisticated or expensive DoS attacks, or
  2. the financial cost of crashing nodes is in fact order(s) of magnitude higher than that of, say, a double-spend attack.

I don’t have much of a clue about what they have done, or what their threat model is, if any.

I misunderstood your message when I read it. I apologize. I thought you were saying that live validators cannot verify live transactions. This is my fault: your message was pretty clear and I did not understand it at first, even though it doesn’t state the fact that you don’t need on-chain data to verify newly broadcast txs. Otherwise you perfectly summarized the change with respect to the need for external data when verifying on-chain kernels. I thought you were saying that live verification was not possible. Also, in a later message you clearly said that on-chain validation was not described; that claim was false, since I describe it many times in the doc. Overall, I got triggered by the first message, which I interpreted as “live verification is broken in your scheme” (blockchain 101), and this would never have happened if I had taken care to read the message correctly, which I didn’t do (not on purpose), and shame on me.

As for the external data, if it creates a scalability, elegance, or conceptual problem for current MW, I would perfectly understand that this could be one reason not to do the proposal.


Thanks for recognizing the misunderstanding.

I would like to ask a follow-up question. I would appreciate it if you would consider what I am saying and help think through the implications for evaluating this proposal.

Algorithm 1, “pseudocode for kernel uniqueness check with no kernel expiry” on page 8, begins with the line -

for i = 0 to N do {

where i is the kernel MMR index.

Am I correct in understanding this to be a linear search over the full set of historical kernels?

If so, then I agree this solves for kernel uniqueness, but it unfortunately scales with the number of kernels and is not a practical solution performance-wise.
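Concretely, my reading of it amounts to something like the following sketch (illustrative only, not the notation from the doc):

```rust
use std::collections::HashSet;

// Sketch of the "check against all historical kernels" reading of Algorithm 1:
// every kernel in the MMR is compared against every kernel seen before it, so
// the index (`seen`) grows with the full kernel history.
fn kernels_are_unique(kernel_excesses: &[[u8; 33]]) -> bool {
    let mut seen: HashSet<[u8; 33]> = HashSet::new();
    for excess in kernel_excesses {
        if !seen.insert(*excess) {
            return false; // duplicate kernel found
        }
    }
    true
}
```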

We are back at the original problem with kernel uniqueness: we must either -

  • index (or iterate over) all kernels, or
  • index a subset of kernels and correspondingly require expiring transactions

The former is not practical and the latter undesirable.


The loop i = 0 to N is the whole IBD algorithm for checking uniqueness of the N+1 kernels of your IBD. It is not a loop which, given one kernel as input, returns whether this kernel is unique within the list of all previous kernels.

You can run the loop written in the pseudocode in parallel with the signature check of the kernels: first verify the signature for the kernel at index i = i_0, then run the loop step of the algorithm for i = i_0, then verify the signature for the kernel at index i = i_0 + 1, then run the uniqueness check for i = i_0 + 1, and so on, up to i = N. So when i = i_0 in your loop, you are indeed verifying uniqueness of MMR[i_0].kernel against all the kernels from 0 to i_0 - 1, but you don’t actually use a list of i_0 elements for that; you only use the lists L_0 and L, which have far fewer elements than i_0. So yes, the check really is done through the always-short list L (which only contains, like a moving window, the kernels of the last (relative_cap + 1) blocks, and by construction never grows beyond that) and the list L_0.
For example, if your wallet has set relative_cap = 1 week, the list L_0 would contain only those kernels between indexes 0 and i_0 - 1 which stayed in the mempool longer than 1 week (such kernels are very rare, hence L_0 stays small). And nothing more.

Once you have finished your IBD of the kernels, you will receive your first kernel in your role as live verifier; it will be the (N+2)-th Grin kernel that you verify (the first N+1 kernels were your IBD). The check for this kernel will be done against the list L from the previous step N+1 (the last step of the IBD check), and against the list L_0 from that same step, with the (N+1)-th kernel being either an element of L or an element of L_0 (depending on its actual relative_height). Once you have finished the check for this (N+2)-th kernel, it will end up either in the “dynamic” list L or in the “static” list L_0 (assuming it is not a duplicate).

So the algorithm definitely doesn’t require verifying each new kernel against all historical kernels, and this also holds for the kernels you verify live. I wrote the verification algorithm for IBD for simplicity of notation, since during IBD all kernels are in the MMR, but the exact same algorithm continues when you verify new kernels broadcast to the network, and the lists L and L_0, against which you verify uniqueness, may be multiple orders of magnitude smaller (think Grin in 2030) than the list of all kernels since genesis.
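As a rough sketch of the above (hypothetical names and types; only the L / L_0 split and relative_cap are taken from the doc), the per-kernel check would look something like this:

```rust
use std::collections::{HashSet, VecDeque};

// Illustrative only: `Excess` stands in for whatever identifies a kernel in
// the uniqueness check.
type Excess = [u8; 33];

struct UniquenessState {
    relative_cap: u64,
    // L: moving window holding the kernels of the last (relative_cap + 1)
    // blocks, tagged with their inclusion height.
    l: VecDeque<(u64, Excess)>,
    // L_0: kernels whose relative_height exceeded relative_cap; this list
    // only grows, but stays small as long as such kernels are rare.
    l0: HashSet<Excess>,
}

impl UniquenessState {
    /// Check one kernel (in MMR / broadcast order) against L and L_0 only,
    /// then file it into one of the two lists. Returns false on a duplicate.
    fn check_and_insert(
        &mut self,
        excess: Excess,
        relative_height: u64,
        inclusion_height: u64,
    ) -> bool {
        // Slide the window: drop kernels included more than relative_cap
        // blocks ago.
        while let Some(&(h, _)) = self.l.front() {
            if inclusion_height - h > self.relative_cap {
                self.l.pop_front();
            } else {
                break;
            }
        }
        // Uniqueness is never checked against the full historical kernel set,
        // only against the short lists L and L_0.
        if self.l.iter().any(|&(_, e)| e == excess) || self.l0.contains(&excess) {
            return false;
        }
        if relative_height > self.relative_cap {
            self.l0.insert(excess);
        } else {
            self.l.push_back((inclusion_height, excess));
        }
        true
    }
}
```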

Together, these lists could still grow linearly in the total number of kernels.


Yes, theoretically, but if you set relative_cap sufficiently high you can be quite sure L_0 stays very short.
You can do a statistical analysis of the Bitcoin network to set a good relative_cap. But I agree with you: in theory it could grow linearly, if miners don’t include kernels soon enough.

Note that the construction introduces a new game-theoretic element into the blockchain: so that people keep a small list L_0, which is good for everyone since it is an ever-growing list, miners (who should use ASICs at some point, hopefully!) have real skin in the game to keep L_0 small and not be “dumb” about it. This can motivate miners to include txs sooner, when possible, and to complain less about fees, for example. I guess very heavy usage of Grin could require recalibrating relative_cap, but we can probably get a good estimate of the right choices by looking at Bitcoin.

I thought relative_cap has no effect on the combined size of L and L_0.
You could set it to infinity to keep L_0 empty.

I don’t quite understand relative_cap from a consensus viewpoint.
Can you give an example of a history whose validity depends on the value of a kernel’s relative_cap?