#grin2020 Roadmap - calling for blog posts with ideas

I’m starting to look into BLS now, and I don’t understand this claim. With the modified BLS scheme[1][2], you can also aggregate public keys if the messages are the same. So, let’s assume you have a block full of plain kernels (no lock heights). For simplicity, let’s assume each has the same fee. Then the message for those transactions is the same.

I’m notoriously bad at parsing and understanding math notation, but from what I can tell, key aggregation in BLS is still just the point addition of each public key. So you should be able to sum all kernel commitments to get the total kernel commitment for the block, and you should be able to aggregate the signatures to get the total signature for the block. And that, along with the inputs, outputs+BPs, and knowledge of the total fee, should be enough to verify the block.
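The aggregation being described can be sketched in a toy model. Below, integers mod a prime stand in for curve points and e(a·G, b·G) = a·b stands in for the pairing; none of this is secure, it only illustrates that same-message BLS verification reduces to one check against summed keys and summed signatures:

```python
# Toy model of same-message BLS aggregation (integers mod a prime stand in
# for curve points; the "pairing" is plain multiplication; not secure).
import hashlib
import random

p = 2**127 - 1  # toy group order

def hash_to_group(msg: bytes) -> int:
    # stand-in for hash-to-curve
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % p

random.seed(42)
m = b"plain kernel, fee=1"           # the one shared message
Hm = hash_to_group(m)

sks = [random.randrange(1, p) for _ in range(5)]
pks = [sk % p for sk in sks]         # pk_i = sk_i * G, with G = 1 in the toy
sigs = [sk * Hm % p for sk in sks]   # sig_i = sk_i * H(m)

agg_pk = sum(pks) % p                # point addition of public keys
agg_sig = sum(sigs) % p              # point addition of signatures

# Pairing check e(agg_sig, G) == e(H(m), agg_pk); with the toy pairing
# both sides reduce to multiplication mod p.
assert agg_sig == Hm * agg_pk % p
```

With a real curve the additions become point additions and the final check becomes two pairings, but the structure is the same.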

I have a few ideas of how to handle different fees to get them all in the same form, but I want to make sure everything I’ve said is correct so far, because it certainly seems like we should be able to get rid of kernels entirely for standard transactions. I’m sure I’m just missing some important nuance though.

[1] https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html
[2] https://eprint.iacr.org/2018/483.pdf

I think I can answer my own question now. To prevent rogue key attacks, it seems like each public key must be known. So, assuming all plain kernels with the same fee, the best we could do is reduce the kernels to 33 bytes - about 1/3 the current size. Still a huge reduction, but probably can’t completely eliminate them.


Correct; the public keys need to be both aggregated and hashed in sequence.
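The hashed aggregation from [1] can be sketched the same way: each key gets a coefficient derived by hashing that key together with the full key list, which is what blocks rogue-key attacks. Again integers mod a prime stand in for curve points (illustration only):

```python
# Toy sketch of rogue-key-resistant key aggregation: coefficient
# t_i = H(pk_i, L) over the full key list L weights each key before
# summing (modular integers stand in for curve points; not secure).
import hashlib
import random

p = 2**127 - 1

def coeff(pk: int, all_pks) -> int:
    data = pk.to_bytes(16, "big") + b"".join(k.to_bytes(16, "big") for k in all_pks)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

random.seed(7)
Hm = int.from_bytes(hashlib.sha256(b"shared message").digest(), "big") % p

sks = [random.randrange(1, p) for _ in range(3)]
pks = [sk % p for sk in sks]            # pk_i = sk_i * G with toy G = 1
ts = [coeff(pk, pks) for pk in pks]

apk = sum(t * pk for t, pk in zip(ts, pks)) % p           # aggregated key
agg_sig = sum(t * sk * Hm for t, sk in zip(ts, sks)) % p  # aggregated signature

# Toy pairing check e(agg_sig, G) == e(H(m), apk).
assert agg_sig == apk * Hm % p
```

Because t_i depends on every key in the list, an attacker cannot choose a key as a function of the others to cancel them out, which is why the individual keys must be known to the verifier.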

The recommended BLS curves are much larger than secp256k1 in order to safely accommodate the pairing; for instance, 48 bytes for BLS12-381.


Thanks. So closer to a 50% size reduction at the cost of some additional computation. Not sure how much additional computation yet, but probably enough to make it not worthwhile unless we also get non-interaction.

I believe it still might be possible to get non-interactive transactions though. I’m working through the finer details now to make sure pruning is unaffected, and all data is appropriately committed to (non-malleable), and then I will share the design. :crossed_fingers:


If a Grin transaction is built non-interactively without revealing information that could cause the security or privacy of the transacting parties to be compromised, for example by using ZEXE (Zero knowledge EXEcution), what is leaked/exposed?

In a keybase grincoin.teams.node_dev#research discussion, phyro wondered whether MW could have built in protection against rogue key attacks on kernel aggregation and I thought that since

sum of UTXO commitments ==
sum of kernel commitments + kerneloffset * G + (height+1) * 60e9 * H

the former being rogue-key-proof (because UTXO range proofs are also proofs of knowledge of the blinding factor) should make the latter rogue-key-proof as well.
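The balance equation above can be checked in a toy model. Integers mod a prime stand in for curve points, and G and H are assumed toy generators (illustration only); the chain here is a single coinbase block:

```python
# Toy check of the Mimblewimble balance equation:
#   sum(UTXO commitments) == sum(kernel excesses) + offset*G + supply*H
p = 2**61 - 1
G, H = 7, 11  # assumed toy generators, not a real curve

def commit(v, r):
    # Pedersen-style commitment r*G + v*H in the toy group
    return (r * G + v * H) % p

REWARD = 60 * 10**9  # 60e9 nanogrin per block
height = 0

r = 123456       # blinding factor; the range proof proves knowledge of it
offset = 42      # kernel offset
utxos = [commit(REWARD, r)]
kernels = [((r - offset) * G) % p]  # kernel excess commitment

lhs = sum(utxos) % p
rhs = (sum(kernels) + offset * G + (height + 1) * REWARD * H) % p
assert lhs == rhs
```

The point of the argument is that the left-hand side is already bound to range proofs, so the equality pins down the right-hand side too.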

So one may not need to store all the public excesses after all.
I hope some cryptographer like Andrew Poelstra or Dan Boneh can weigh in on this…


That would make it more interesting. It would be especially cool if we could also move the fee and lock height outside of the kernel. I’m thinking this might enable us to have the same message for all the commitment signatures and hence make it possible to use the “simpler test” (3) in [1] to verify?

With regards to moving fees and lock_height, maybe we could use the 20 bytes in the bulletproof to do some magic and put stuff there or commit to it? Another reason we might want to also research this option is that if the aggregation is indeed possible, having:

  • a single kernel public key
  • a single kernel signature
  • a single message

also improves privacy by not leaking the approximate number of transactions, which right now equals the number of kernel commitments in the block. Without also removing the fees and lock_height, it would still be possible to derive the number of transactions from the “number of messages”. Imagine how cool it would be to make it impossible to measure the actual usage even as Grin grows.

[1] https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html#mjx-eqn-eqaggsame


What is the motivation behind storing the public excess in the first place?

Regardless, storage is more space efficient with validity proofs, as demonstrated by ZK Sync and Coda Protocol.

Computational re-execution in a replicated state machine makes no sense if it can be avoided. Only the state needs to be replicated for guaranteed safety and liveness.

Validation of each state transition is more efficient with a short recursive proof, with fast verification time. Proof generation is computationally heavy but only performed once per block. BLS signature verification replicated by every node will in aggregate consume more CPU/GPU in a decentralised network.

Am I understanding correctly that in your opinion recursive proofs like Halo make other validation mechanisms somewhat redundant?

I’m not sure that’s correct if one can aggregate both public keys and signatures and the message that was signed was always the same, but I’m not a cryptographer, so it’s best to wait for the opinion of others. I would love it if you expanded a bit on this for those of us who are not that well trained in it.

Of course, will do my best to provide some context!

My first question was mostly directed to anyone involved with building the current design, or someone who has acquired that knowledge later on. Not sure who all of these wizards are!

In my following statement, I’m arguing that regardless of whether public excess storage is required, replacing signatures with validity proofs over a zkp-friendly curve is still more space efficient.

The idea of replacing signatures with validity proofs has been successfully implemented by Coda Protocol. It is also the core component of ZK Sync, an account-based implementation of ZK Rollup. I would say it is a proven concept by now.

Coda Protocol refers to their stateless client as the “blockchain”, which is a bit confusing. It is no longer clear what the definition of a blockchain is… A replicated state machine can’t be confused with a stateless client though, so it could be argued that it is a more meaningful term for referring to a blockchain?

As @lehnberg pointed out wrt this post, in summary “The ambition is to solidify Grin’s foundation throughout the course of 2020 and make it easier to build on in future years.”. What I’m trying to convey, is that a switch to a zkp friendly curve could have several potential benefits. Among these are:

  • Syncing time could be reduced to a few minutes, even with BTC’s UTXO set.

  • IBD could be as quick as downloading a torrent file containing outputs without signatures.

  • Output linkability could potentially be solved with the “ZK ZK Rollup” concept.

  • New ways of doing non-interactive transaction building would also be possible.


For a more in-depth read, check this paper where Dan Boneh and the inventor of Roll_up, Barry Whitehat, formalize further advancements in this area.

“This reduces the on-chain work to simply verifying a succinct proof that transaction processing was done correctly. In practice, verifiable outsourcing of state updates is done by updating the leaves of a Merkle tree, recomputing the resulting Merkle root, and proving using a SNARK that the state update was done correctly.”
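The state-update pattern the quote describes can be sketched without the SNARK part: change a leaf, recompute the Merkle root, and (in the real scheme) prove the transition was correct. A minimal sketch, with made-up account leaves:

```python
# Minimal sketch of the verifiable state-update pattern: apply a state
# transition to one leaf and recompute the Merkle root. The SNARK that
# proves the update was done correctly is elided.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

state = [b"acct0:10", b"acct1:20", b"acct2:30", b"acct3:40"]
old_root = merkle_root(state)

state[1] = b"acct1:15"  # one state transition (a transfer out of acct1)
new_root = merkle_root(state)

# On-chain verifiers would only see (old_root, new_root, proof),
# never the transactions themselves.
assert new_root != old_root
```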


I prefer the idea of providing validity proofs on top of the current kernel history, perhaps as some 3rd party service. Clients then have the option of doing the current full verification, or to bypass most of it with a recent validity proof.


This keeps our core consensus layer relatively simple and limits the scope of bugs.

It maintains compatibility with hundreds of other secp256k1 based blockchains, allowing for atomic swaps based on adaptor signatures, and integration in decentralized exchanges.

As better and better zkp schemes are developed in the coming years, we can easily update the optional proofs without risk of consensus bugs.


The current curve and the blake2b hashing function are zkp unfriendly, so the added-on proofs will be very expensive to compute. That will reduce their frequency to perhaps once a week or once a month, and will require some beefy hardware to run on.


This one is false.

Compatibility with other blockchains is not inherently curve bound. This assumption is wrong.

If a specific chain doesn’t have a suitable precompile for interoperability, a third chain can be used as an intermediary for atomic swaps.

Rollup is already being implemented by several decentralised exchanges. Validity proofs solve critical problems with traditional atomic swaps and allow for more flexible inter-blockchain communication (IBC), as well as improved privacy and scalability.

While I agree with the conservative stance wrt changing consensus and limiting the scope of bugs, I think it is important to challenge and verify all claims of advantages and disadvantages in this discussion.

If Bulletproofs can be used as a non-recursive validity proof with cut-through on secp256k1, that would be really cool.

Grin is not well integrated with the larger blockchain ecosystem, as it stands today. I think it would be a mistake to hold on to misguided assumptions about interoperability.

Not sure if @tromp is claiming that cross curve adaptor sig swaps are not possible, or that atomic swaps are not possible without persisting signatures on-chain. Either way, both statements are incorrect.

Atomic swaps based on time-locks are slow and suffer from “the free option problem”. Decentralised exchanges with universal interoperability, like RenVM, are more likely to attract liquidity.

Cross-chain Rollup with in-/validity proofs is also a possibility that would enable improved interoperability.

I think they’re highly non-trivial at best, judging from these still incomplete attempts by Monero:


So the way I see it, Grin is actually very well integrated, probably the most well-integrated coin. Consider the following…

I think Bitcoin will have some logic added to it soon in the Schnorr/Taproot update. I think it will allow for logic like: if X occurs, then send Y amount of bitcoin to address Z.

So in terms of swaps, Bob says “X amount of bitcoin will be sent to address A if Y amount of grin is sent to address B.” Alice could encrypt a grin transaction file for the amount requested and broadcast that it’s being sent to address B. If certain conditions are met, the encrypted transaction file could be unlocked with a key allowing Bob to finalize and receive his grin. The release of that encryption key to Bob, so he can finalize and actually receive his grin, could depend on whether the bitcoin were confirmed to be sent to Bob’s address A.

That’s why having a coin where the receiver has to agree to receive is a feature, not a bug. So not only is Grin well integrated, it’s actually going to be integral to the whole ecosystem.


Atomic swaps between Grin and other secp256k1 coins would use adaptor signatures as explained in

and which Jasper vd Maarel demonstrated on testnet.
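The core trick behind adaptor signatures can be sketched with scalars alone (real swaps use secp256k1 points and full Schnorr signatures; the numbers below are arbitrary toy values):

```python
# Toy sketch of adaptor-signature secret extraction: completing the
# pre-signature and broadcasting it reveals the swap secret t to
# whoever holds the pre-signature (scalars mod a prime stand in for
# real Schnorr signature values; not secure).
n = 2**61 - 1              # stand-in for the curve group order

s_pre = 123456789          # pre-signature given to the counterparty off-chain
t = 987654321              # swap secret only one party knows

s_full = (s_pre + t) % n   # completed signature, published on-chain

# The counterparty extracts t by subtraction, which lets them claim
# the coins on the other chain.
extracted = (s_full - s_pre) % n
assert extracted == t
```

This is why no extra data needs to persist on-chain: the completed signature itself leaks exactly the secret the swap needs.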


TBH I’m trying to understand what a discrete logarithm is (please don’t explain). I’ll figure it out eventually.

BLS aggregation helps with storage but it seems to suffer during verification, see “Pairing is not so efficient” in https://medium.com/cryptoadvance/bls-signatures-better-than-schnorr-5a7fe30ea716