We’d like to introduce VRF-based mining, a surprisingly simple and effective way of making pooled mining impossible. Instead of using hash functions, we use Verifiable Random Functions (VRFs) for proof-of-work-based consensus. Because the VRF binds authorship to the hashes, a pool operator has to reveal his private key in order to outsource the mining process to miners, and the miners can then anonymously steal the cryptocurrency in the pool operator’s wallet.
This technique was originally designed for conventional blockchains. For MimbleWimble-based blockchains, which have no addresses, the binding can be achieved by executing the VRF with the private key that holds the coinbase transaction.
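To make the idea concrete, here is a minimal C++ sketch of the mining loop. The `vrf_output` helper below is a hypothetical stand-in (a keyed hash), not a real VRF; a real deployment would use an actual VRF such as ECVRF. The only point is that every difficulty attempt requires the secret key.

```cpp
// Minimal sketch of the VRF-based mining loop (assumed helper, not the
// authors' implementation). A real deployment would use an actual VRF such
// as ECVRF; a keyed hash stands in below only to make the control flow concrete.
#include <cstdint>
#include <functional>
#include <string>

// Placeholder for VRF_sk(msg): deterministic given (sk, msg). With a real VRF
// the output also comes with a proof verifiable against the public key.
uint64_t vrf_output(const std::string &sk, const std::string &msg) {
  return std::hash<std::string>{}(sk + "|" + msg);  // stand-in, NOT a real VRF
}

// Same difficulty check as hash-based PoW, but every attempt needs the secret
// key, so a pool operator cannot outsource the loop without handing sk to miners.
bool mine(const std::string &sk, const std::string &header,
          uint64_t target, uint64_t &nonce_out) {
  for (uint64_t nonce = 0;; nonce++) {
    if (vrf_output(sk, header + std::to_string(nonce)) < target) {
      nonce_out = nonce;
      return true;
    }
  }
}
```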
This idea was co-developed by Runchao Han (me, runchao.han@monash.edu) and Haoyu Lin (chris.haoyul@gmail.com). We thank Cheng Wang, Omer Shlomovits and Jiangshan Yu for their valuable feedback.
It’s true that we can require the signature of the header to meet the difficulty target. The problem is that commonly used signature algorithms (e.g., ECDSA) are not deterministic. This gives the PoW an additional search space and may render the mining algorithm useless. A more detailed analysis can be found here: https://hackmd.io/@ZcwjuAe3RUCFVPrXtvriPQ/S1YM1KZWI#Related-work
That’s not a problem; that’s a convenience. It means that the header doesn’t need a separate nonce field. It can just use the fact that (nondeterministic) signatures have their own choice of nonce.
How on earth does making mining==signing make it useless?
Let’s first consider the scenario with both a nonce and the randomisation parameter of the signature algorithm (say k in ECDSA). Consider a blockchain that adopts a memory-hard mining algorithm. The miner can simply search for a valid k (rather than a valid nonce) to meet the difficulty requirement. Since signing is usually not memory-hard, mining is no longer memory-hard.
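For concreteness, here is a small sketch of this k-grinding; `sign_with_k` below is a hypothetical stand-in rather than real ECDSA. The difficulty check is applied to the signature itself, so the miner grinds over k with cheap signing and the memory-hard function is never evaluated.

```cpp
// Sketch of grinding over the ECDSA randomisation parameter k (sign_with_k is
// an illustrative stand-in; real ECDSA computes r = (kG).x mod n and
// s = k^{-1}(H(m) + r*sk) mod n, which is cheap and needs almost no memory).
#include <cstdint>
#include <functional>
#include <string>

uint64_t sign_with_k(const std::string &sk, const std::string &header, uint64_t k) {
  return std::hash<std::string>{}(sk + header + std::to_string(k));  // placeholder
}

// The search space is k, not a nonce fed into a memory-hard hash, so mining
// degenerates into repeated (memory-light) signing.
uint64_t grind_k(const std::string &sk, const std::string &header, uint64_t target) {
  for (uint64_t k = 1;; k++)
    if (sign_with_k(sk, header, k) < target) return k;
}
```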
Now let’s consider the scenario without a nonce, with only the randomisation parameter of the signature algorithm. This is essentially equivalent to VRF-based mining, except that mining can only be done by signing rather than by existing mining algorithms, which limits the design space of mining algorithms.
Also, non-deterministic signature algorithms rely on the randomisation for their security. I’m not sure whether using it as a nonce will affect the signature’s security or not…
Ah, you want to be both non-outsourceable and memory hard.
Equihash is not a hash function. The others are themselves outsourceable.
What do you mean by much heavier? Use more time / memory? How does that make outsourcing more IO intensive?
Btw, both Ethash and CryptoNight suffer from slow verification, an undesirable property for a PoW.
I think non-outsourceability can be added to instantly verifiable memory hard PoWs like Cuckoo Cycle and Equihash by appropriate tweaking of their seeding process.
> Equihash is not a hash function. The others are themselves outsourceable.
Oh, you are right about Equihash. Thanks for pointing that out. I will update the article later.
Indeed the others are themselves outsourceable, but they can be embedded in $H_1$ or $H_2$ (in VRFs).
> What do you mean by much heavier? Use more time / memory? How does that make outsourcing more IO intensive?
Yeah I mean more time-consuming. Sorry for the ambiguity. I will update the article later.
The reason it is I/O intensive is as follows. Let’s use outsourcing $H_1$ as an example. Given a nonce, to outsource the computation the pool operator first sends the nonce to a miner; the miner computes $H_1$ and sends back its output. In other words, each nonce requires a roundtrip between the pool operator and a miner, which is extremely I/O intensive.
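As a back-of-envelope illustration (the numbers below are assumptions, not measurements): even ignoring latency, a per-nonce roundtrip caps the pool’s effective hash rate at whatever its network link can carry.

```cpp
// Illustrative only: the pool's outsourced hash rate is bounded by link
// capacity when every nonce needs a roundtrip. Assumed numbers, not measurements.
#include <cstdio>

int main() {
  const double link_bits_per_s = 1e9;   // assumed 1 Gbit/s pool uplink
  const double bytes_per_nonce = 100;   // assumed nonce + H_1 output + framing
  const double nonces_per_s = link_bits_per_s / 8.0 / bytes_per_nonce;
  std::printf("outsourced rate cap: %.2e nonces/s\n", nonces_per_s);  // ~1.25e6
  return 0;
}
```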
> Btw, both Ethash and CryptoNight suffer from slow verification, an undesirable property for a PoW.
Verifying the nonce might not be the performance hotspot of verifying a block. It only requires a single execution of the hash function, which is much faster than other checks such as verifying transactions.
> I think non-outsourceability can be added to instantly verifiable memory hard PoWs like Cuckoo Cycle and Equihash by appropriate tweaking of their seeding process.
Good idea. I will think of how to make VRF-based mining compatible with Cuckoo Cycle and Equihash.
It is if H_1 takes under a millisecond to compute. But if you make it more time consuming, then outsourcing starts paying off. In other words, non-outsourceability puts a limit on how heavy you can make H_1.
Blocks are often not even downloaded until a chain of headers claiming more cumulative work has had its PoW verified. If verification is too expensive, then this becomes a DoS attack vector.
For instance, in Cuckaroo’s sipblock() function, one could initialize the siphash_state<> shs with a signature of both keys and edge0.
Then if a solution cycle is found, the relevant PROOFSIZE signatures are added to the solution. The miner thus needs to compute NEDGES / EDGE_BLOCK_SIZE signatures for every graph, and EDGE_BLOCK_SIZE can be chosen to provide the ideal trade-off between making outsourcing too IO intensive and keeping the PoW more memory bound than signature computation bound.
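A hedged sketch of this seeding tweak (not the actual Cuckaroo sipblock() code; sign64 and mix64 below are illustrative stand-ins for a real signature scheme and for siphash_state<>):

```cpp
// Sketch only: seed each block of EDGE_BLOCK_SIZE edges from a signature over
// (keys, edge0), so edge generation requires the block-signing key. The
// PROOFSIZE signatures touched by a solution cycle would be attached to it.
#include <cstdint>

constexpr uint32_t EDGE_BLOCK_SIZE = 64;
constexpr uint32_t EDGE_BLOCK_MASK = EDGE_BLOCK_SIZE - 1;

// Stand-in for a deterministic signature over (keys, edge0) with secret key sk.
uint64_t sign64(uint64_t sk, uint64_t keys, uint64_t edge0) {
  uint64_t x = sk ^ keys ^ edge0;
  x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33;  // placeholder mixing
  return x;
}

// Stand-in for the per-edge siphash_state<> rounds.
uint64_t mix64(uint64_t seed, uint64_t nonce) {
  uint64_t x = seed + 0x9e3779b97f4a7c15ULL * nonce;
  x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL; x ^= x >> 27;
  return x;
}

// One block of EDGE_BLOCK_SIZE edge endpoints, seeded by a signature.
uint64_t sipblock_signed(uint64_t sk, uint64_t keys, uint64_t edge, uint64_t *buf) {
  uint64_t edge0 = edge & ~(uint64_t)EDGE_BLOCK_MASK;
  uint64_t seed = sign64(sk, keys, edge0);  // outsourcing this needs sk
  for (uint32_t i = 0; i < EDGE_BLOCK_SIZE; i++)
    buf[i] = mix64(seed, edge0 + i);
  return buf[edge & EDGE_BLOCK_MASK];
}
```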
Yes, H_1 should not be too slow. But this requirement should be easy to satisfy. First, I/O operations are much slower than local computations (whether computation-bound or memory-bound). Second, such frequent I/O operations make a mining pool expensive to maintain, which either makes running the pool unprofitable or leads to a high pool fee rate.
Agree. This is indeed a DoS attack vector.
This still seems to be outsourceable. What if the pool operator signs shs and edge0, then distributes them to the miners?
Alternatively, what if we replace hash24 with a lightweight VRF? As long as the VRF is not the performance hotspot, Cuckoo remains memory-bound.
That’s too IO intensive, as there are NEDGES / EDGE_BLOCK_SIZE different edge0 (one per edge block). E.g. with 2^29 edges and block size of 64, there are 8 million signatures.
It will be a performance hotspot though, if you do a signature at every single edge. That’s why I propose doing it only once every so many edges.
(The other aspect of edge blocking, making it sequential over all edges in a block, is not needed here and is better left out, as it slows down verification.)