Unique kernels without transaction expiry

Following the heated discussions on play and replay attacks, I wrote a short paper that summarizes my technical investigation and understanding of the concept of unique kernels. I hope the document helps community members who are interested in, and open to, the different solutions we have at hand to better understand the properties of unique kernels and their usefulness in fixing the vulnerabilities. As the title suggests, I have in particular introduced in the doc a fairly scalable and simple proposal that allows checking the uniqueness of kernels without the need to enforce expiry on them.

The paper contains:

  • How kernel uniqueness relates to the available fixes for play attacks

  • Description of the kernel expiry proposal

  • Security study of the kernel expiry proposal, including the DOS attack vectors it may introduce

  • Presentation of the proposal without kernel expiry, whose main ideas build upon the kernel expiry construction

  • Pseudo code of an algorithm that checks kernel uniqueness in the non-expiring proposal

  • Pros and cons of the proposals

The document may feel a bit long to read, in particular for community members who have not yet studied Mimblewimble closely, but I thought it was important to give as much detail as possible. I hope the content is mostly clear. Feedback and questions are welcome.

pdf available at keybase://public/kurt3/kerun.pdf


@Kurt Thanks for writing this up. This looks to be a very comprehensive overview of what would be required to support unique kernels.

Am I right in thinking that the idea here is to effectively “cache” and index recent history, allowing fast uniqueness checks against a window of recent kernels?
And then if no duplicate was found, only then fall back to looking through the full history?

Recent history allows you to verify the existence of a duplicate (similar to what we do for NRD) but cannot verify the non-existence of a duplicate kernel. So for every lookup in the recent history that did not find a duplicate, you still need to go back and look over the full kernel history?

I’ll describe the algorithm of the kernel expiry proposal first, since it is simpler, and it is useful to understand it before digging into the algorithm for the proposal with non-expiring kernels.

So to begin with, my algorithm lets you be sure that no two identical kernels are on the blockchain, not only in the recent history.

This is a fair description of the algorithm for kernel expiry, yes. You only need to check the recent kernels in a dynamically moving window of size (relative_max + 1) blocks. This is because two signatures (on the excesses, in Mimblewimble) with two different messages are always different.

And it turns out that, if a kernel is in the MMR, you can of course recover its signature’s message: it is inclusion_block_height minus the kernel’s relative height (this is a consensus rule). That integer must be the message of that kernel’s signature. And due to the consensus parameter relative_max, a kernel whose signature has message x (whatever integer it is) can only be included in the blockchain between block heights x and x + relative_max.

Recent history is sufficient for the proposal with expiring kernels, and almost sufficient for the proposal with non-expiring kernels. For more details, one needs to understand section 3 well first, and then read section 4.
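To illustrate the sliding-window idea, here is a minimal sketch of the duplicate check in the expiring-kernels case. All names here (`RELATIVE_MAX`, `recent_sigs`, `can_include`) are my own, not Grin’s; the point is only that, since a duplicate kernel signs the same height x and any kernel with message x is valid only in blocks x to x + relative_max, an index over the last relative_max + 1 blocks is enough to prove non-existence of a duplicate on the whole chain:

```python
# Hypothetical sketch of the sliding-window duplicate check in the
# kernel-expiry proposal. All names and values are illustrative.

RELATIVE_MAX = 1440  # illustrative consensus parameter (~1 day of blocks)

def can_include(kernel_sig, sig_block_height, tip_height, recent_sigs):
    """Decide whether a kernel may be included at `tip_height`.

    `recent_sigs` is the set of signatures of every kernel included in
    the last RELATIVE_MAX + 1 blocks (the sliding window).
    """
    # Consensus rule: a kernel signing height x is only valid in
    # blocks x .. x + RELATIVE_MAX -- this is the "expiry".
    if not (sig_block_height <= tip_height <= sig_block_height + RELATIVE_MAX):
        return False
    # A duplicate kernel signs the same height, so it could only have
    # been included inside the same window, which the index covers.
    return kernel_sig not in recent_sigs
```

Because two signatures with different messages are always different, a signature absent from the window is guaranteed to be absent from the entire chain.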

I hope that answers your question.

Yes it can

Oh I think I get it now.

You are proposing -

  • use absolute lock heights on all kernels
    • but optimize how we store these
  • require uniqueness of kernel by (excess, signature), not just excess
    • (this is the part I was missing, signature differs because absolute height differs)

Kernel uniqueness could be verified on signatures, yes.

A Schnorr signature is a pair (s, R), where:

s = r + Hash(signature_block_height, R)·x

the total excess is X = x·G, and R is the public nonce: R = r_1·G + r_2·G = r·G.

It is better to verify uniqueness on s: s cannot be replayed with a different signature message, due to the collision resistance of SHA-256. So that would be the way to go.
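To make the collision argument concrete, here is a toy sketch of how s depends on signature_block_height. This is scalar arithmetic only with an illustrative prime modulus (Grin actually uses secp256k1 and aggregated curve points; the names below are mine), but it shows why reusing the same x, r and R with a different height yields a different s:

```python
import hashlib

# Toy prime modulus standing in for the real group order; the curve
# points are elided since they don't affect the collision argument.
ORDER = 2**255 - 19

def challenge(signature_block_height, R_bytes):
    # e = Hash(signature_block_height, R), reduced into the scalar field
    h = hashlib.sha256(
        signature_block_height.to_bytes(8, "big") + R_bytes
    ).digest()
    return int.from_bytes(h, "big") % ORDER

def schnorr_s(x, r, R_bytes, signature_block_height):
    # s = r + Hash(signature_block_height, R) * x  (the formula above)
    return (r + challenge(signature_block_height, R_bytes) * x) % ORDER
```

With x, r and R fixed, two different heights give two different challenges (barring a SHA-256 collision) and hence two different s values, which is why uniqueness can be checked on s alone.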

With the disclaimer of me not knowing all the specific technical specs I can tell this is without a doubt a seriously well written proposal. One that stands above and beyond the rest. Much respect for this work and commitment Kurt :muscle:.

Now can we live stream the emergency consultation of the council?


Yes to all of the above, this is a great addition to the discussion, kudos to @Kurt for taking the time to put it together.

Fortunately, there doesn’t seem to be an emergency. It’s a shame we didn’t have this doc when the topic was discussed in Tuesday’s dev meeting; it would have been really useful.

But since we have it now, let’s add it to the agenda for the next meeting. By then there might be more feedback on the proposal, and we could have a more informed discussion than the one we had on Tuesday. Hopefully @Kurt will be able to join as well.

All in all, a positive development!


I did this doc yesterday, and finished it up today.

I had also done a 5-page Google doc on the same topic, available since July 18; it was suggested by Quentin, and John Davies put it publicly on Keybase, but apparently nobody looked at it.

Also, 7 days ago, I said on the forum that a solution exists without kernel expiry (in the thread where I present the construction for kernel expiry). In that post, I also detail how the algorithm can work. There was no reaction at all to this post, except from a community member who more or less told me to shut up and go to Beam because it was painful for him to see my posts.


I did this doc yesterday, and finished it up today.

That’s a great accomplishment. All I meant to say was that it would have been good to have had it. It’s good we have it now.

I had also done a 5-page Google doc on the same topic, available since July 18; it was suggested by Quentin, and John Davies put it publicly on Keybase, but apparently nobody looked at it.

I think people did, there’s a link to it in the meeting notes.


i feel your efforts man. :+1: :+1:


Thank you. Unfortunately, and this is not a complaint, since I was also free to leave and silently abandon as many people in communities do, I chose to persist. The problem for me was not having a disagreement in itself; it was having a disagreement on quite a critical vulnerability, and battling for two months against proposals that make quite insecure and non-user-friendly wallet tweaks. Not counting the number of times my points were ignored, or not reported correctly in meetings. I don’t have any special animosity, but how much time would we have gained if everyone had started from the assumption that we must take care of security and UX, and worked towards that?
I am not saying that I have or had the perfect solution, but replay and play attacks are tricky, and I mentioned multiple times that wallet fixes do not provide correct security, and sometimes not correct privacy either. It was not only me; many other people contributed and I learned a lot, but I also became a bit alienated by these debates and everything around them. So for now I choose to stay on the forum only, and I don’t see myself going to meetings right now.
I have not been perfect. I did my best; sometimes I was aggressive, because the other person does not need to be openly aggressive for you to feel disrespect or misconsideration, or the all-too-frequent use of arguments from authority in technical discussions that are tricky enough that we should all strive to be rigorous in what we say.
But yeah, I probably have my faults too; it is sometimes difficult to have an objective perspective on events. For now, I just want to talk technical, and I am not interested in governance and the decision process right now. I think people know my opinions and my technical points, as I have repeated them many times. I understand that there may be disagreements.
Anyway, there are many smart people who have been involved and are still involved in this debate. I am sure they want Grin to succeed, and some of them make the decisions (that’s OK, we don’t use voting processes, so at some point someone must deliberate).
So I don’t complain, and I am sorry if I did anything wrong, but for now: extended governance holidays, and a focus on technical discussions, as can be the case on the forum.


For what it’s worth, I just read through the document in detail, and it reads much more clearly than the long-winded forum debates we’ve been seeing. [*]

Thank you again for writing it, and I personally now understand your proposal much better. The structure and format was really helpful. For example, it wasn’t clear to me until now that you had two proposals for replay attacks, one with kernel expiry and one without.

From the paper, I draw the conclusion that you prefer the non-expiring proposal, as it doesn’t have the DOS vector. Is that right?

Is the requirement of allowing Duplicate Outputs (DO) only there to counter play attacks, or is it needed for replay attacks as well? What are the downsides of allowing duplicate outputs in the mempool and elsewhere?

[*] Which I wouldn’t want this thread to turn into either, so let’s try to stay on topic!

It is someone I spoke to in DM two days ago about my proposal who told me about the second DOS vulnerability in the case of using kernel expiry, so kudos to him. Not sure if it was known by other people? (The first one was known, and we knew how to properly fix it.)
I knew it was kind of possible to remove kernel expiry, and was convinced of it 7 days ago, when I explicitly made a post outlining the idea of the algorithm. But after the discussion two days ago I chose to include the non-expiry proposal in the pdf, even though nobody had reacted to it on the forum.

I don’t know yet if the second DOS vulnerability of the kernel expiry proposal is fixable; David thinks it is. I would be curious to know what people think on this one.

All replay attacks are fixable with kernel uniqueness. Also fixable with the original Phyro proposal with PayJoin.

Allowing duplicate outputs is for Play attacks only

This is quite a write up! I don’t have time today, but will take the time tomorrow to go through the ‘no kernel expiry’ idea :+1:


Yes, we brought it up in the dev meeting. Meeting notes: https://github.com/mimblewimble/grin-pm/blob/master/notes/20200721-meeting-development.md#55--expiring-kernels-ek

There are two different ones. The first one was known, and a fix was known. The second one is a bit sneakier, but it requires “full blocks”.

Scalability: Around 99% of the time, it is useful to note that inclusion block height = signature block height

This is unlikely to be true, given the inherent latency in transaction relay, specifically the stem phase during Dandelion. Transactions tend to be included several blocks later than their creation time.

One important thing to add here is this would impact transaction verification in some important ways.

We can currently validate the kernel signature in isolation.
i.e. Signatures can be validated without reference to any on-chain data. The message being signed is included fully in the kernel itself.

The approach here introduces a dependency on external data.
Verifiers must derive the signature height based on the inclusion height, which in turn relies on both the block header and the kernel MMR. For historical kernels, we need to determine which block a kernel was included in based on the kernel_mmr_size of each header.

This can be done, but it does affect the simplicity (and performance) of the current kernel signature validation, where we simply batch validate signatures based on the associated kernel data.
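To illustrate the extra lookups, here is a rough sketch of how a historical verifier could first locate a kernel’s inclusion block from the headers, then bound the possible signed heights. This is my own simplification, not Grin code: it treats kernel_mmr_size as a plain cumulative kernel count (the real MMR size also counts internal nodes), and the function and parameter names are hypothetical:

```python
import bisect

def inclusion_height(kernel_index, kernel_counts):
    """Height of the block that added the kernel at `kernel_index`.

    `kernel_counts[h]` is the cumulative number of kernels after block h
    (monotonically increasing) -- a stand-in for the header's
    kernel_mmr_size, ignoring the MMR's internal nodes.
    """
    return bisect.bisect_left(kernel_counts, kernel_index + 1)

def candidate_messages(height, relative_max):
    # The signed height x satisfies x <= height <= x + relative_max,
    # so the verifier must consider this whole range of messages.
    return range(max(0, height - relative_max), height + 1)
```

Each historical signature check therefore depends on header data, unlike today’s batch validation where the signed message is fully contained in the kernel itself.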

Did you read the document? :slight_smile:

You have not yet understood the transaction process in my proposal. I explicitly say that signature_block_height (the message of the signature) stays with the transaction up to the inclusion of the kernel in the blockchain (it also stays with the transaction in the mempool). As a consequence, there is no issue for live verifiers to verify the transaction while it travels through the network, since they have direct access to the message before it is included in a block.

Please read the document first, as this is explicitly explained at the beginning of section 3, where I start describing the protocol.

Don’t forget about historical validators though. Not everyone sees the transactions themselves.