Idea to fix both play and replay attacks

I introduce an idea here that may be viable for fixing replay and play attacks. I have yet to think thoroughly about its full implications, but I believe it is a possibility worth studying a bit.

Current situation:

The current ideas to fix replay attacks are either using coinjoin and unspendable outputs to fully prevent them (an idea of Phyro’s, augmented by Tromp) or verifying kernel uniqueness. Both have their trade-offs, and the choice of solution would lead to different threat models for how to treat play attacks.

Proposed construction:

The general idea of this proposal is simply to enforce that the partial kernel offsets are computed in a way that is both output-dependent and time-dependent (i.e., block-height dependent).

In this proposal, the partial excesses are computed after a) the creation of the tx outputs, and b) the calculation of the partial offsets, by following the procedure introduced below.

Receiver side:

  1. The receiver first creates their tx outputs, and their associated bulletproofs
  2. Then, for each of their tx outputs, the receiver derives the corresponding receiver_output_offset with the following formula:
    receiver_output_offset = Hash(block_height || P), where P is the Pedersen commitment of that receiver’s transaction output, and block_height is the block height at transaction-building time.
  3. The receiver computes the unique private key sk (to be used in the calculation of their partial kernel excess) that verifies r = sk + receiver_output_offset, where r is the blinding factor for that output.
  4. The receiver repeats steps 1 to 3 for each of their tx outputs.
  5. They can then derive their (unique) partial excess secret key receiver_sk_excess:
    receiver_sk_excess = sum(r) - sum(receiver_output_offset)
  6. The receiver can now provide a valid signature for their partial excess, and their part of the transaction balances out correctly.
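The receiver-side derivation can be sketched as follows. This is a scalar-only sketch under loud assumptions: a small prime stands in for the curve group order, commitments are opaque dummy bytes, and blake2b is an arbitrary hash choice (not necessarily Grin’s); a real implementation would use secp256k1 scalars and actual Pedersen commitments.

```python
import hashlib

# ASSUMPTION: toy prime standing in for the curve group order.
GROUP_ORDER = 2**61 - 1

def output_offset(block_height: int, commitment: bytes) -> int:
    """Step 2: receiver_output_offset = Hash(block_height || P), as a scalar."""
    h = hashlib.blake2b(block_height.to_bytes(8, "big") + commitment).digest()
    return int.from_bytes(h, "big") % GROUP_ORDER

def receiver_sk_excess(blinding_factors, offsets) -> int:
    """Step 5: receiver_sk_excess = sum(r) - sum(receiver_output_offset)."""
    return (sum(blinding_factors) - sum(offsets)) % GROUP_ORDER

# Example: two outputs at block height 1000 (dummy commitments and blindings).
height = 1000
commitments = [b"P1", b"P2"]
blindings = [123456789, 987654321]  # the outputs' blinding factors r
offsets = [output_offset(height, P) for P in commitments]

# Step 3: per-output key sk satisfying r = sk + receiver_output_offset.
sks = [(r - o) % GROUP_ORDER for r, o in zip(blindings, offsets)]

# The partial excess equals the sum of the per-output keys sk.
assert receiver_sk_excess(blindings, offsets) == sum(sks) % GROUP_ORDER
```

The final assertion just restates step 5: summing r = sk + offset over all outputs and rearranging gives sum(sk) = sum(r) - sum(offset).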

Sender side:

  1. The sender performs the six steps above for their change outputs.
  2. They perform steps 2 to 6 above for their tx inputs.
  3. They can then derive their sender_sk_excess by adding sender_sk_excess_1 (obtained in step 1) to sender_sk_excess_2 (obtained in step 2).

Transaction building:

Now the sender and the receiver can together sign the total excess and build a valid transaction.

Note that no total kernel offset is provided together with the kernel, as is currently the case for Grin txs:

Indeed, the kernel offset can be derived directly from the block height and the Pedersen commitments appearing in the transaction. We provide the details below.

First rule:

Senders and receivers have to append the block height that they used to compute the offsets to each of their tx inputs and outputs, and this block height must stay with each output and input until the inclusion of those outputs in a Grin block.
This will not end up as on-chain data: the miner will derive the total offset for his block himself (using the “second rule” mentioned below) and put that total offset in the block header, similarly to what is currently done.

Second rule:

Live verifying nodes derive the total kernel offsets of the txs by using the following formula:

total_offset = sum(Hash(output.block_height||output.P)) - sum(Hash(input.block_height||input.P))

Note that this also works when several transactions are aggregated together.
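A live verifier’s derivation of the total offset, and the fact that it survives aggregation, might be sketched like this (same toy conventions as before: a small prime standing in for the curve group order, dummy byte commitments, blake2b as an arbitrary hash choice):

```python
import hashlib

GROUP_ORDER = 2**61 - 1  # ASSUMPTION: toy stand-in for the curve group order

def hash_to_scalar(block_height: int, commitment: bytes) -> int:
    """Hash(block_height || P), reduced to a scalar."""
    h = hashlib.blake2b(block_height.to_bytes(8, "big") + commitment).digest()
    return int.from_bytes(h, "big") % GROUP_ORDER

def total_offset(outputs, inputs) -> int:
    """Second rule: sum over outputs minus sum over inputs,
    where each entry is a (block_height, commitment) pair."""
    out = sum(hash_to_scalar(h, P) for h, P in outputs)
    inp = sum(hash_to_scalar(h, P) for h, P in inputs)
    return (out - inp) % GROUP_ORDER

# Aggregation: merging two txs just concatenates their inputs and outputs,
# and since the formula is linear in the per-output/per-input terms, the
# merged offset is the sum of the individual offsets.
tx1 = ([(1000, b"O1")], [(1000, b"I1")])  # (outputs, inputs)
tx2 = ([(1000, b"O2")], [(1000, b"I2")])
merged = (tx1[0] + tx2[0], tx1[1] + tx2[1])
assert total_offset(*merged) == (total_offset(*tx1) + total_offset(*tx2)) % GROUP_ORDER
```

The linearity shown by the final assertion is what makes the rule compatible with transaction aggregation.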

Third rule:

Live verifiers can check that a transaction balances out by using the above formula for the kernel offsets.

On top of that, live verifiers also apply another rule: each block_height must lie between current_block_height - 1 and current_block_height + 1, to allow for a bit of asynchrony (the block height at tx-building time might be off by one block from the block height at broadcast or verification time).
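The window check of the third rule reduces to a one-liner (here current_block_height is the height as the verifier currently sees it; the function name is just for illustration):

```python
def height_in_window(claimed_height: int, current_block_height: int) -> bool:
    """Third rule: an output's block_height must lie in
    [current_block_height - 1, current_block_height + 1]."""
    return current_block_height - 1 <= claimed_height <= current_block_height + 1

# A tx built one block before or after broadcast is still accepted:
assert height_in_window(999, 1000) and height_in_window(1001, 1000)
# Anything older is rejected, which bounds replays to a 3-block window:
assert not height_in_window(997, 1000)
```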

Properties of the construction:

  1. Given the information of a transaction (kernel, tx inputs, tx outputs, offset), one cannot reuse the same kernel (for example, during a play or replay attack) with different outputs by adjusting the offset. This is true because changing an output changes the partial offset for that output, since a hash of the Pedersen commitment is used to derive the offset. In other words, changing an output would also require changing the excess to match, and this cannot be done by the receiver or the sender without the help of the other party. This prevents replay or play attacks based on kernel-offset adjustments.
  2. It is impossible to reuse (through play or replay) a kernel excess once the allowed block-height window has passed. This fixes the problem of play attacks. It is due to the block height appearing in the hash used to derive the offset, combined with the rule provided in the section above (section “Third rule”).
  3. Replay attacks within the 3-block timeframe are not possible if we do not allow duplicate outputs.

Comparison with tx expiry proposal:

The tx expiry proposal verifies kernel uniqueness in a fairly scalable manner, but it carries some DoS vulnerability.

With the current proposal, live verifying nodes refuse outputs (and their associated tx) whose block_height falls outside the range current_block_height - 1 to current_block_height + 1 as they see it (see section “Third rule”). They do this the same way as when they refuse a transaction because the signature of the excess is invalid, for example. This prevents such txs from making it to the mempool and, as a consequence, prevents DoS attacks on the mempool.

Compatibility with David’s proposal:

@david recently proposed eliminating the final step of tx building.
That proposal is compatible with this new tx-building method if we modify it a bit:
The modification is that only the receiver’s public nonce is derived by the sender. There is no need for the sender to derive the receiver’s partial excess if we do not put the total excess in the message of the excess’s signature, which does not hurt security. That way the receiver keeps the freedom to create the partial excess (which ultimately depends on their output) using the six steps described earlier.

As I said at the beginning, I have not fully thought about everything, but it seems like it can be a way to prevent play and replay attacks without hurting security or usability.

Issues:

a) Because this proposal relies on live verifiers (and live verifiers only) to check that the kernel offsets are derived according to the “Second rule”, it does not actually seem resistant to reorgs.

b) As is, the proposal seems to make it possible to link outputs and inputs, because the kernel offset is fully determined by the outputs and inputs. I think a way to fix this is to separate the kernel offset into two parts: kernel_offset_inputs and kernel_offset_outputs.

We could try to modify the protocol so that:
sum(kernel_offset_outputs) = sum(Hash(output.block_height||output.P))
and kernel_offset_inputs would be provided by the sender with the kernel, as we do today for the total offsets. Individual kernel_offset_inputs would also be subject to aggregation across blocks, like today. That way, once aggregated, it is not possible to link outputs or inputs to a kernel, providing the same privacy as today. But I think replays using kernel-offset adjustments are possible with this solution, so it would in fact need a fix.


Great proposal for when, for example, the mobile wallet is a bit more developed. That way the sender can simply scan a QR code and be sure the transaction is sent to the receiver while being protected against play and replay. For day-to-day transactions, transactions should happen within one block, so expiration of the kernel is no issue. Not sure whether block reorganisations would be an issue at all; once a transaction is in a block and past the miners’ checkpoint, it will stay in a block even in case of block reorganisations, right?

Edit: OK, I think I get what you mean. In case of reorganisations, there could be two or more blocks with the same block height, in which case the transaction could be replayed.

This is a bit more subtle for reorganisations. As you have seen in the proposal, the calculation of the kernel offset for very recent blocks changes compared to current Grin. Instead of just reading the kernel offsets in each transaction (for live verification) and then using the aggregated kernel offset of each block once it is mined to calculate the offset for the whole block, the proposal directly derives the kernel offset for each transaction via sum(Hash(output.block_height||output.P)) - sum(Hash(input.block_height||input.P)) for the recent blocks. Then, at some point, verifiers simply check the kernel offsets like today by using the aggregated offset value in the block headers of historical blocks.

The problem is that in a reorg you can go beyond the horizon for which the offsets are calculated with the formula above, and then broadcast a new longest chain in which you have not followed the rule implied by the formula and instead have directly used offsets of your choice. For that reason you could essentially replay in reorgs, because it is the application of that formula to compute offsets that protects against play and replay.

Anyway, during reorgs, nobody usually replays kernels; people are more worried about double spending their transactions in order to make real $$$.

So all this is a bit subtle. But the advantage of that proposal is that it protects against all play or replay attacks. Thank you for your interest in it ; )


What stops a miner from replaying old transactions?

The threat model of this proposal, so to speak, relies essentially on live validation of transactions. By only accepting outputs whose block_height lies between current_block_height - 1 and current_block_height + 1, the live validators act a bit like a “funnel” on the transactions that are broadcast to them.
But I think I understand what you mean: if a miner replays an old kernel (with the associated old outputs) at block n, then the live validators of block n+1 will see an issue with the kernel-offset validation of block n (it does not validate with the expected formula), and block n would thus be rejected. I’m not saying there are no problems with the proposal; I had the idea but have not put much additional thought into it yet.

Makes sense. Perhaps then it’s worth keeping track of Known Issues somewhere (in the original post perhaps?) in order to make it clear what areas require more work? It might also make it easier for others who are interested to contribute.

No problem. For now I only see one unknown, with reorgs larger than 2 blocks, as I noted in the OP. But there is no huge incentive to replay a kernel when doing a reorg. I will update the OP if/when other issues are found.

Just found an issue with linkability. I updated the Issues section. If someone has an idea on how to fix it, it is welcome.