Unique Kernel Thread #73

This thread will serve as an attempt to come to agreement on the best way to prevent replay attacks via unique kernels in order to facilitate the writing of a future RFC. This is not a place to discuss the pros and cons of a consensus vs wallet based approach. Please avoid discussing wallet-based alternatives, dogmatic arguments against consensus changes, personal attacks, mentioning “the core team”, bad-faith discussions, or general off-topic comments.

The simplest way to support unique kernels without a large impact on scalability is to introduce a new kernel type (expiring_kernel) that simply contains an 8-byte field indicating the maximum block height at which the transaction can be included. These kernels cannot be included in any block more than 7 days (i.e. 10,080 blocks) before that max block height.

Nodes must enforce kernel uniqueness for all expiring_kernels, which means they will need to keep all expiring_kernels included in the last 10,080 blocks, preferably indexed in memory.
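To make the validity window concrete, here is a minimal sketch of the consensus check described above. The names (`EXPIRY_WINDOW`, `can_include`) are illustrative, not an actual Grin API:

```python
# Sketch of the consensus validity check for a hypothetical expiring_kernel.
# EXPIRY_WINDOW and can_include are illustrative names, not Grin's actual API.

EXPIRY_WINDOW = 10_080  # blocks; roughly 7 days at one block per minute

def can_include(expiry_height: int, block_height: int) -> bool:
    """A kernel with max height `expiry_height` may only appear in blocks
    within the window (expiry_height - EXPIRY_WINDOW, expiry_height]."""
    return expiry_height - EXPIRY_WINDOW < block_height <= expiry_height

print(can_include(20_000, 19_999))  # True: inside the window
print(can_include(20_000, 9_000))   # False: more than 7 days early
print(can_include(20_000, 20_001))  # False: past the max height
```

Because inclusion is bounded on both sides, nodes only ever need the last 10,080 blocks of expiring_kernels to enforce uniqueness.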

While it can be assumed that most transactions will use expiring_kernels, it’s not necessary to disable plain kernels or any other existing kernel types. It seems likely we’ll also need a new kernel type that supports both a minimum and a maximum block height, to complement the existing LOCK_HEIGHT kernels.

Though it is up to node developers to decide exactly which rules to apply to mempool logic, to avoid potential DoS attacks, at minimum the mempool should not accept transactions that are about to expire (in the next few hours, perhaps).

What are the cons of this proposed change, and, most importantly, how can it be improved?

8 Likes

Just for reference, I’ll repeat my con:

Bitcoin experts like Andrew Poelstra stress the importance of “tx monotonicity”, which is the property that once a tx passes initial mempool entry checks, it remains valid while its inputs are unspent.
I agree this is a nice property that simplifies thinking about tx processing. It makes it easier to manage the mempool.

It also prevents having to deal with certain unwanted complexities. Currently the tx fees protect against spamming of the network. For a tx to be broadcast worldwide, it cannot escape paying tx fees. But if you could publish txs that are about to expire, then the spammer reduces the odds of having to pay any fees (to nearly zero if they are a miner that just found a new block). This problem would be magnified once blocks fill up.

It’s also not clear what impact expiry height has on tx aggregation. Could a tx with a distant expiry get aggregated with another that has an imminent expiry and then get rejected, negating the issuer’s expectation? Or will we have to limit aggregation, harming privacy?

When in the distant future blocks fill up, tx expiry would be increasingly common, and it’s less clear how broadcasting resources will be covered by fees.

Then there’s the issue of what bitcoiners call “reorg-safety”.
In case of a medium-size reorg, expiry risks losing transactions that just barely made it into the old branch, particularly if the reorged branch mined empty blocks. Losing already-confirmed transactions due to an accidental reorg, and not being able to replay them without reconstructing them from scratch, could be a serious problem.

3 Likes

Thanks for writing this up. How does this affect transaction aggregation? Is an aggregated transaction valid only up to min(kernel.expiry for kernel in kernels)? If yes, does this change how we aggregate transactions to avoid possible “expire in next block” kernels?
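If the answer to the question above is yes, the effective expiry of an aggregate would presumably be derived like this (a sketch; the kernel representation is illustrative):

```python
# If an aggregated transaction is only valid while every constituent kernel
# is valid, its effective expiry is the minimum over all kernel expiries.
# This is an assumption about the proposal, not confirmed behavior.

def aggregate_expiry(kernel_expiries: list[int]) -> int:
    """Effective max inclusion height of an aggregate of expiring kernels."""
    return min(kernel_expiries)

print(aggregate_expiry([20_000, 18_500, 25_000]))  # 18500
```

This is what makes the "imminent expiry" kernel a problem: one short-lived kernel caps the lifetime of the whole aggregate.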

1 Like

For the sake of clarity can we precisely define “uniqueness” in this context.

If a transaction kernel consists of -

  • excess commitment
  • signature
  • fee
  • “features” byte
  • other optional feature specific data (lock_heights etc.)

What subset of this data would be included in kernel “identity”?

It only applies to kernels chosen to be unique, so you could go with commitment only (and still allow plain kernels with the same commitment, for example). Or you could go with commitment | lock_height. Or you could go with the whole kernel. I’m leaning toward commitment only for simplicity, but I’m open to hearing ideas why different unique kernels should be supported with the same commitment.
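Assuming identity is the excess commitment only, the node-side uniqueness index could look roughly like the following sketch. All names are illustrative, and the pruning window matches the 10,080-block validity window from the OP:

```python
# Sketch of a node-side kernel uniqueness index, assuming kernel "identity"
# is the excess commitment only, kept over a sliding window of recent blocks.
# KernelIndex and its methods are illustrative, not Grin's actual code.

from collections import deque

EXPIRY_WINDOW = 10_080  # blocks ~ 7 days

class KernelIndex:
    def __init__(self):
        self.seen = {}            # commitment -> inclusion height
        self.by_height = deque()  # (height, commitment), oldest first

    def is_duplicate(self, commitment: bytes) -> bool:
        return commitment in self.seen

    def insert(self, commitment: bytes, height: int) -> None:
        self.seen[commitment] = height
        self.by_height.append((height, commitment))

    def prune(self, tip_height: int) -> None:
        # Entries older than the window can never collide again, so drop them.
        while self.by_height and self.by_height[0][0] <= tip_height - EXPIRY_WINDOW:
            _, c = self.by_height.popleft()
            del self.seen[c]

idx = KernelIndex()
idx.insert(b"excess-1", 100)
print(idx.is_duplicate(b"excess-1"))  # True: seen within the window
idx.prune(100 + EXPIRY_WINDOW)
print(idx.is_duplicate(b"excess-1"))  # False: pruned after the window passes
```

With commitment-only identity, the index stays small and the lookup is a single hash-map probe per kernel.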

Yes, and I believe once we have full blocks, we’re also going to run into similar issues due to the fee market. Should low fee transactions be aggregated with high fee transactions? I suspect the future might involve some kind of centralized coinjoins that only accept kernels with high enough fees to be included in a near block, and stem-phase aggregation will become less of a reality. Or maybe we’ll only aggregate transactions above a certain fee threshold (but how do we enforce that?)

Whatever ideas we come up with, we should try to apply them to mixed-fee aggregations to see how they play out there.

Maybe one way to approach this is to take a step back from the implementation details (unique kernels, replay attack mitigation etc.) and consider it from the perspective of -

“Do we want to introduce transaction expiration to the mempool?”

Bitcoin would appear to favor monotonicity over transaction expiration.

Zcash in contrast explicitly took the opposite approach - https://github.com/zcash/zips/blob/master/zip-0203.rst

I believe ZIP 203 is roughly similar to what is being proposed here.

Question: If the motivation behind unique kernels is to prevent replay attacks, wouldn’t it require all kernels to be expiring_kernels in order for it to be effective? I.e. if some x% of kernels are not, wouldn’t those transactions need to take other measures?


Also: As I was digging a bit on this subject, I found this discussion on /bitcoin where gmaxwell outlines a double spend attack that relies on an “expiration replacement race”.

Please take the following with a grain of salt. I don’t know how applicable that is to Grin, but I personally think it illustrates well some of the complexities that can arise when actually implementing mempool logic like “not accepting transactions that are about to expire in the next few hours” and expecting synchronised, consistent behaviour across the entire network. It might be easy enough to specify, but I wonder how straightforward it is to implement in a way that does not create unexpected attack vectors.

1 Like

Yes, transactions that don’t use unique kernels would need to take additional measures, but the recommendation for most transactions should be to use unique kernels. I just prefer not to remove kernel types since there may be uses for them.

One example is self-sending to a cold wallet, maybe you want to send with low fee and no expiration, so it gets included whenever there is free block space. A plain kernel is perfect for that. Since you control both sender and receiver, replay attacks aren’t a problem.

The attack outlined by gmax isn’t really a valid attack these days anyway, since greedy mining makes 0conf untrustworthy. In any case, it doesn’t apply here, since we are talking about actual consensus expiration instead of just mempool policy.

Yes, transactions that don’t use unique kernels would need to take additional measures, but the recommendation for most transactions should be to use unique kernels.

Does that then mean that more replay attack counter-measures would need to be built irrespective of this proposal? What should happen when a user (rightly or wrongly) opts not to use an expiring_kernel as part of their transaction?


But it doesn’t apply here anyway, since we are talking about actual consensus expiration instead of just mempool policy.

Yes, but OP reads:

to avoid potential DoS attacks, at minimum the mempool should not accept transactions that are about to expire (in the next few hours, perhaps).

My thinking was that if we, for example, define transactions that are “about to expire” as those within expiry = 2 hours = 60 * 2 = 120 blocks, then some nodes on the network may determine that expiry has occurred before others do, and through this there may be a way to get nodes to concurrently treat different transactions as valid: for some nodes the previous transaction would already have expired, while for others it would still be in the mempool. Is that unfounded?
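To make the concern concrete, here is a minimal sketch of such a mempool policy. The constant and function names are illustrative, and the 120-block figure assumes Grin's one-minute block target:

```python
# Illustrative "about to expire" mempool policy. ABOUT_TO_EXPIRE and
# mempool_accepts are hypothetical names; 120 blocks ~ 2 hours at one
# block per minute.

ABOUT_TO_EXPIRE = 120  # blocks

def mempool_accepts(expiry_height: int, local_tip: int) -> bool:
    """Reject a tx whose remaining lifetime is at or below the threshold."""
    return expiry_height - local_tip > ABOUT_TO_EXPIRE

# Two nodes whose local tips differ by one block can disagree on the same tx:
print(mempool_accepts(1_120, 1_000))  # False: exactly at the threshold
print(mempool_accepts(1_121, 1_000))  # True
```

Since the check depends on each node's local tip, nodes near a height boundary really can reach different accept/reject decisions for the same transaction, which is the scenario described above. Note that this is mempool policy only; consensus validity is unaffected.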

This should be considered an advanced feature. GUI wallets probably shouldn’t expose it. I don’t think our protocol necessarily has to prevent every situation where someone can do something foolish. I mean, you can accidentally lock your inputs for 30 years in the future using the lock_height feature today. But the standard way of transacting absolutely should be safe for all users, regardless of knowledge level.

Sure, but this is only a problem if someone is foolishly accepting 0conf transactions. We know enough now to know that’s never safe. Users should not rely on the mempool for doublespend protection.

There hasn’t been any more feedback, so I’ll go ahead and respond to every remaining point to see if we can come up with an agreeable solution. I believe only @tromp has issues that remain unaddressed, so I’ll go ahead and take a stab at those now.

While I appreciate that Poelstra is brilliant and has a deep understanding of crypto & blockchains, I don’t see any technical arguments in this paragraph, so there’s nothing for me to refute.

Why do you think this is the case? Is it just because you think miners will include it since they don’t want to miss out on fees? Because game theory would suggest that’s not true.

With empty blocks:
None of this matters at all since miners will include any non-zero-fee tx they receive.

With full blocks:
If there are txs with higher fees, they will include those instead. Otherwise, competing miners will end up with the higher fee txs. If the concern is that people will use short expiration to get included quicker, miners would easily recognize that and realize that once the tx expires, the tx originator will just create another tx. There’s no financial advantage to accepting lower fee txs when there is not a mining monopoly.

The only aggregation we do that is useful for privacy is stem-phase aggregation. Stem-phase agg already has the problem where someone could try to take advantage of someone overpaying, so we already need rules about aggregation. It’s trivial to add an additional rule that only aggregates when transactions do not expire in something like less than 90% of whatever we choose for the default expiry. So, if we decide the default tx expiry should be 1 day, we won’t aggregate anything that expires in less than 22 hours. And of course, the tx originator can always resend if it’s unhappy with the results of aggregation.

In other words, this is just a special case of users aggregating low-fee transactions, and in practice, does not make the situation any worse.
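The stem-phase rule described above can be sketched as follows. The default expiry and threshold are the example values from the text (1 day, 90%), and all names are illustrative:

```python
# Sketch of the proposed stem-phase aggregation rule: only aggregate a tx
# that still has at least 90% of the default expiry remaining. DEFAULT_EXPIRY
# and can_aggregate are hypothetical names, not Grin's actual code.

DEFAULT_EXPIRY = 1_440                     # blocks ~ 1 day
MIN_REMAINING = int(DEFAULT_EXPIRY * 0.9)  # 1296 blocks ~ 21.6 hours

def can_aggregate(expiry_height: int, tip_height: int) -> bool:
    return expiry_height - tip_height >= MIN_REMAINING

print(can_aggregate(10_000 + 1_440, 10_000))  # True: freshly broadcast tx
print(can_aggregate(10_000 + 500, 10_000))    # False: too close to expiry
```

Since nearly all transactions are broadcast right after creation, nearly all of them pass this check, which is why the rule costs little aggregation in practice.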

This is a very rare scenario, but something worth discussing. This is just another form of a double-spend attack. In large reorgs, the few transactions performed by the attacker already cause havoc on transaction graphs. This just provides one more situation where that can occur. The solution to the problem is the same as we already have today: adjust confirmations required according to your desired security needs.

TL;DR: All of the attacks in here are just variations of things that already exist. This includes various attempts at aggregating low-fee txs with txs that overpay (which isn’t even necessarily a bad thing), or already-problematic double-spend attacks. Even with no additional rules added to stem-phase aggregation or mempool acceptance (aside from making sure txs are valid, i.e. not expired), the situation is not much worse than today. And with minor tweaks or simple rules that can be added to the mempool, I see no reason we should expect any more problems than we already have.

1 Like

What I’ll mention is not directly related to the subject but rather to re-orgs in general.
In Mimblewimble, performing a reorg in which you replay only the specific transactions you’re interested in while keeping all the others included in the blocks of your reorged chain is considerably harder than in Bitcoin. To do that in Grin, the attacker would need to have saved the individual kernel offset of the original tx he sent and wants to replay: if he wants to replay only his own tx, he has to adjust the kernel offset of the new block by the kernel offset of that original tx. Thus, Grin risks preventing even financially-driven attackers, who would not want to hurt the chain more than necessary, from keeping all the original transactions they are not double-spending included in the new longest chain; and this is not necessarily a positive thing.

TLDR: they are very likely to (be forced to) remove all the original transactions of the reorged blocks, causing potential financial losses to many users.

To fix this, I would suggest that at some point we include in wallets an API that saves all our kernel offsets and partial kernel offsets.
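The offset bookkeeping involved can be illustrated with plain modular arithmetic. This sketch uses a stand-in prime rather than the real secp256k1 scalar field order, and the function name is hypothetical:

```python
# Sketch of the kernel-offset adjustment described above. The total offset of
# a block is the sum of per-tx offsets (mod the scalar field order), so
# excising one tx whose individual offset was saved means subtracting it.
# P is a stand-in prime for illustration, NOT the secp256k1 group order.

P = 2**255 - 19

def remove_tx_offset(block_offset: int, tx_offset: int) -> int:
    """Offset of the block after excising one tx whose offset is known."""
    return (block_offset - tx_offset) % P

total = (40 + 7) % P                      # block offset = sum of tx offsets
print(remove_tx_offset(total, 7) == 40)   # True: the remaining txs still sum
```

Without the saved per-tx offset (the `7` above), the subtraction is impossible, which is exactly why the wallet-side API for saving offsets is being suggested.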

In some countries, there are institutions that help drug addicts use their substances in dedicated centers, under decent conditions that pose less risk to the users’ health, and these are probably a good thing in general. This API would be in the same spirit: it would minimize the negative impact of some reorgs by making it easier for an attacker to limit the general impact of their attack.

1 Like

I have a couple of questions:

  1. I think IBD would require a sliding window of 1 week to validate no duplicate kernels were present
  2. If I understand it right, the maximum_inclusion_height - current_height defines the attack window. What happens if the window is 60 blocks (1 hour) or a day? Doesn’t this mean the attack is possible within this interval?
  3. This might introduce a time limit for miners to collect the fees. Let’s assume we have full blocks. Will people converge to sending transactions that are just about to expire to incentivize the miners to pick the fees? If they don’t include it in the next 2 blocks, they can’t pick up the prize
1 Like
  1. Correct. We already do this, or something very close. If we choose a maximum timeout of 1 day, we don’t have to change IBD at all.
  2. No, because there would be a duplicate kernel, so their tx won’t be valid. The solution is not just expiring kernels; the solution is no duplicates. We provide a window in which the transaction is valid, so nodes only have to check that window for duplicates.
  3. This is what I just tried (and apparently didn’t do a good enough job at) answering in my previous comment. In cases where blocks are full, you’re suggesting miners will delay including higher-fee txs in order to accept lower-fee ones that are about to expire, so as to avoid missing out on the prize. But to accept those lower-fee transactions, they have to give up higher-fee transactions, which will be picked up by competing miners in the next block. Therefore, they’d end up missing out on an even bigger prize. What you’re suggesting would only be true if a single entity held greater than 50% of the hashpower, but that’s already bad for all kinds of other, more serious reasons.
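Point 2 can be shown in a few lines: within the validity window, a replayed kernel is a byte-for-byte duplicate and is rejected by consensus. This is an illustrative sketch, not Grin's actual code:

```python
# Why a replay inside the validity window fails: nodes reject any block
# containing a kernel commitment already seen within the window.
# `window` and `try_include` are illustrative names only.

window = set()  # commitments seen in the last EXPIRY_WINDOW blocks

def try_include(commitment: bytes) -> bool:
    if commitment in window:
        return False  # duplicate: the replay is rejected by consensus
    window.add(commitment)
    return True

print(try_include(b"excess"))  # True: original inclusion
print(try_include(b"excess"))  # False: replay attempt within the window
```

So the "attack window" between broadcast and expiry isn't exploitable: during that window the kernel is unique, and after it the kernel is expired.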
4 Likes

Yet making it easier to reason about tx processing is a large benefit.

Because if a tx expires after height H, then broadcasting it when block H is just found but not yet relayed will waste resources.

It makes sense for miners to use the same mempool acceptance criteria as any other nodes, and reject txs with fees below the required minimum. It’s in their long term interest to have everyone know that minimum fees are enforced, else many people will try to sidestep fees (paying just one nanogrin) by sending them directly to mining pools, and there will be less miner income in future.

Having to make trade-offs (i.e. compromises) between different objectives is not ideal, and not trivial.

Doesn’t solve the problem that expiring transactions sometimes can’t be replayed in a reorg.

I much prefer adding simple rules in the wallet, requiring no additional consensus-model complexity.

This still isn’t much of a technical argument, since it’s not falsifiable or something I can attempt to “solve.”

Mempools shouldn’t accept transactions about to expire. It’s a one-line change.

If they don’t pick up on those fees, even if low, another miner will. It’s simple price competition.

It’s not clear which trade-offs and which objectives you’re referring to. Nearly all transactions will be broadcast shortly after creation, meaning they will still have most of the default expiry remaining. Those will be aggregated together with no issue (assuming similar fees). I think we’ll see that full blocks and dynamic fees already ruin our chances of ever having any significant stem-phase aggregation. I hope I’m wrong, but I don’t see how it could ever work well.

Correct, it’s unsolved in the same way that any large reorg can always lead to transactions that can’t be replayed.

Yes, this has been made abundantly clear, but I’d like to focus on solutions that don’t sacrifice privacy, and don’t require bloating the blockchain with outputs that are never spent.

2 Likes

On tx-monotonicity, Andrew Poelstra remarked:

"As you say, the benefits to monotonicity include

  1. You can’t get out of paying fees by broadcasting near-expiring transactions (assume additional network rules like RBF prevent you doing the same thing by double-spending)
  2. Harms to aggregation, both noninteractive and coinjoin-style
  3. Reorg safety
  4. Harms to second-layer protocols (though I assume these would work if you just required participants to disable non-monotonic features).

The only thing I can think of that you didn’t mention is:

  1. Caching: if transactions expire at a fixed time this is not too bad, but if the expiry conditions are more complicated then you have to constantly re-validate transactions in your mempool to check if they’ve expired"

On your reply, he remarked:

"But he seems not to understand the concern about fees (he says “miners will include higher-fee transactions instead” as though that were a solution, rather than the problem).

He suggests solving problems with aggregation by adding epicycles, which I don’t care to analyze in detail, but he says “this is just a special case of users aggregating low-fee transactions” which is definitely untrue, both because the aggregation function is different (weighted average for feerate, minimum for expiry times), and more importantly, because reasoning about fees is a 1D problem (efficiently optimizable) while fees + expiry is a 2D problem (knapsack problem, NP complete).

Then he suggests that the reorg safety issues around transactions which automatically become invalid regardless of user/miner choice are commensurate with reorg safety issues in the presence of active double-spend attackers. It is not. In the case of expiring transactions the result is an unreliable network. In the case of a double-spend attacker the result is unreliable transactions when dealing with unreliable people."

He clarified that “It is a term from history of science to refer to addition of orbits upon orbits upon orbits, to try to make a broken solar system model match observation. I’m using it by analogy, see
https://en.wikipedia.org/wiki/Deferent_and_epicycle#Bad_science”

4 Likes

Did you talk to him about replay attacks and output ownership?

I explained that tx expiry was proposed as one way to deal with replay attacks. Don’t know what you mean by output ownership.