Mimblewimble CoinSwap proposal

Yeah, we have 3 options:

  1. Have offset be 0
  2. Have one node set offset
  3. A mixnode picks a random scalar k, signs a k*G kernel, and then adds sum(private_excess_i) - k to the offset. This requires also passing the offset backwards, though.
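A toy sketch of option 3, using a subgroup of integers mod a small prime as a stand-in for curve scalars and points (the parameters p, q, g below are illustrative toy values, not real curve constants):

```python
import random

# Toy group: order-q subgroup of Z_p^*, generator g (stand-ins for
# secp256k1 points/scalars; NOT real parameters).
p, q, g = 2039, 1019, 4

# Each mixnode i holds a private excess from re-blinding the outputs.
private_excess = [random.randrange(1, q) for _ in range(5)]

# Option 3: a mixnode picks a random scalar k and signs the kernel k*G ...
k = random.randrange(1, q)
kernel_excess_pub = pow(g, k, p)          # k*G, committed to in the kernel

# ... and sets offset = sum(private_excess_i) - k, passed backwards.
offset = (sum(private_excess) - k) % q

# Verifier's balance check: the total excess equals kernel excess plus
# offset*G, without revealing which node knew k.
total_pub = pow(g, sum(private_excess) % q, p)
assert total_pub == (kernel_excess_pub * pow(g, offset, p)) % p
print("kernel + offset*G matches total excess")
```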

@phyro thank you for the reply and clarification.

I was making a bad assumption re: routing. My concern is that without dynamic routing, it only takes one node being unavailable for all mixing to stop. Couldn't the mixing be completely disrupted by DoS'ing a single server? I know addressing this adds a ton of complexity, so I'm not sure how much (if at all) it is worth talking about now.

@scilio thank you very much for the explanation, it is clear now. I'll leave a proper review in your PR in the next few days, but on first pass it looks like you accomplished everything set out in your first milestone. Nicely done, glad to have you here.


You're correct. If one node is not available, it becomes impossible to peel the onion and the process comes to a halt.

I like this option. It resembles what both sender and receiver do in late locking: picking a random kernel and setting/adjusting their offset accordingly.


I'm not sure if we have mentioned this before, so I'll leave it here just in case. This protocol can also be done interactively for people who want a trustless coinswap. Let k be the number of people that want to perform the coinswap. Our coinswap will have k mixnodes: the parties randomly order themselves to determine their position in the mixnode chain, and each picks and shares an ephemeral public key that will be used for the onion encryption. Since every mixnode obfuscates the links to the next set of outputs through its share, by participating in such a protocol you can be sure the input/output linkage is completely blinded, because you won't tell anyone the links.

The scale of the mixing is reduced by the interactivity requirement, and a reorg can hurt quite a bit because it's unlikely the transactions could be recreated (you'd need to keep all the mixnodes online), but the bloat is still minimal: even though the rounds are smaller, e.g. 10 people coinswapping, participants can contribute multiple 1-1 coinswaps and, more importantly, they can use a single kernel instead of k kernels. This is because you know the coinswap can't be reversed, since you've been one of the mixnodes. The fact that they can use a single kernel in this case is beautiful: a coinswap between 10 people is indistinguishable from a self-coinswap of 10 inputs (and also from a regular transaction or a CoinJoin).

This could serve as a fallback to a decentralized variant of the protocol, which has very similar properties to the noninteractive coinswap.


Cool! Having only one kernel would be great to minimize the long-term bloat of the chain. What would be the downsides apart from smaller scale due to interactivity? Would this type of coinswap be technically more complicated to implement? Do participants know which inputs are from whom, since they add their inputs in sequence? Pros/cons list please 🙏.

What I described is almost the same protocol, but in an interactive setting. I never thought about it much, but I'm pretty sure you could also blind which inputs are from whom. The downside is clearly a smaller anonymity set, which can be annoying if a resourceful attacker keeps participating in these rounds, along with the mentioned reorg cases (though that depends on the anonymity set size and frequency). I prefer a global service as described in this topic for these reasons. Even if the proposal Tromp described uses k kernels (for safety reasons), it's still less bloat than the interactive version with 1 kernel, because it allows more people to join a single coinswap round. I'm mostly writing things down just to keep the ideas in one place for when new people come around.


I just realized there's an even more obvious solution, and I believe a better one. Mixnodes add their partial excess when going forward, then sign the total excess and adjust the offset when going backwards. Why is this better? Because we get a single kernel with the same guarantee as with M kernels (no undo attack if at least 1 node is honest), but more importantly, it makes the tx indistinguishable from other N-N transactions with 1 kernel. An observer can't tell whether it was a coinswap or not (I expect many different coinswap services around, not just the daily one). Even better, it doesn't leak how many mixnodes were involved, which is especially important when the users themselves act as mixnodes. On top of that, it allows a user to create an N-N tx containing only self-spends, and nobody can tell whether it's a self-spend or a coinswap.
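A toy sketch of the forward/backward single-kernel idea, modelling the joint Schnorr signature over the aggregate excess in a small multiplicative subgroup (parameters and hash-to-scalar are illustrative stand-ins, not real curve or Grin constants):

```python
import hashlib
import random

# Toy Schnorr multisig over an order-q subgroup of Z_p^*
# (illustrative parameters, not real curve constants).
p, q, g = 2039, 1019, 4

def challenge(*vals):
    """Toy hash-to-scalar for the kernel signature challenge."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

n = 4  # number of mixnodes
excess = [random.randrange(1, q) for _ in range(n)]  # each node's partial excess
nonce = [random.randrange(1, q) for _ in range(n)]   # each node's signing nonce

# Forward pass: accumulate the public totals.
X = 1  # total public excess
R = 1  # total public nonce
for e, k in zip(excess, nonce):
    X = (X * pow(g, e, p)) % p
    R = (R * pow(g, k, p)) % p

# Backward pass: each node contributes a partial signature over the total.
c = challenge(R, X, "kernel")
s = sum(k + c * e for e, k in zip(excess, nonce)) % q

# The single aggregated kernel signature verifies against the total excess.
assert pow(g, s, p) == (R * pow(X, c, p)) % p
print("single aggregated kernel verifies")
```

No observer can tell from the final kernel how many nodes contributed partial signatures, which is the indistinguishability point above.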


This is a good idea. I don't think it will stop the coinswap from standing out, but it does save a few kernels, which is always good.

To make this work, each mixnode should generate both a random private excess and a random private nonce, and add the corresponding public excess and nonce to the cumulative versions passed forward. The final mixnode can then use the accumulated totals as the public key and nonce for the single kernel. On the way back, each mixnode adds its input and output for fee collection, and adjusts the coinswap offset to compensate for its private excess, the values x_i,j it added to commitments, and this added fee input/output.
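A toy sketch of just the offset bookkeeping on the way back, working with scalars mod a small prime as a stand-in for curve scalars (the nonce handling and the fee input/output are omitted for brevity; all names and parameters are illustrative):

```python
import random

# Toy scalar arithmetic mod q (stand-in for curve scalars; illustrative).
q = 1019
rand = lambda: random.randrange(1, q)

n = 3                            # number of mixnodes
input_blind = rand()             # blinding factor of the user's input
output_blind = input_blind       # running blind of the output being mixed

e = [rand() for _ in range(n)]   # each node's random private excess
x = [rand() for _ in range(n)]   # each node's re-blinding factor x_j

# Forward pass: node j re-blinds the output by x_j and keeps e_j
# for its share of the single kernel.
for xj in x:
    output_blind = (output_blind + xj) % q

kernel_excess = sum(e) % q       # signed jointly in the single kernel

# Backward pass: each node adjusts the offset to compensate for its
# private excess and the x_j it added to the commitment.
offset = 0
for ej, xj in zip(e, x):
    offset = (offset + xj - ej) % q

# Transaction balance: output blind - input blind = kernel excess + offset.
assert (output_blind - input_blind) % q == (kernel_excess + offset) % q
print("balance holds with a single kernel")
```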


Just wondering how the kernel aggregation would affect scalability in the long term.
For each value spent and for each blinding factor (private key) involved, a rangeproof is needed, or are these aggregated as well?

So what happens when you aggregate many transactions into one kernel? Are the rangeproofs combined? Does this mean you cannot easily discard this kernel with its rangeproofs, since the chance of all outputs being spent is very small? Or is space actually saved, since a single rangeproof can be used for all outputs spent?

Just trying to wrap my head around it.

No, nothing to do with blinding factors. One rangeproof per output, which cannot be aggregated.

We can never discard kernels.

Yes, 1 kernel instead of n.


It slipped my mind that rangeproofs are not part of the kernel. So having a single kernel is a nice space saving, although the major bloat is the rangeproofs, which luckily can be discarded once an output is spent.

In the documentation it was mentioned that a rangeproof was needed for the blinding factor, see below. Maybe I misinterpreted it.

It's also important to note that range proofs for both the blinding factor and the values are needed. The reason for this is that it prevents a censoring attack where a third party would be able to lock UTXOs without knowing their private keys by creating a transaction …

That looks poorly formulated. The range proof is needed to show that the value is in [0,2^64). Knowledge of the blinding factor is needed to construct a rangeproof.
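A toy illustration of why the range proof matters, using Pedersen-style commitments over a small multiplicative subgroup (the group parameters and the two generators are illustrative stand-ins; real Mimblewimble uses Pedersen commitments on secp256k1):

```python
# Toy Pedersen commitments C = g^v * h^r over an order-q subgroup of Z_p^*
# (illustrative parameters; assumed-independent toy generators g and h).
p, q = 2039, 1019
g, h = 4, 9

def commit(value, blind):
    return (pow(g, value % q, p) * pow(h, blind % q, p)) % p

# Spend a 5-coin input into outputs of 9 and -4 coins: the commitments
# still balance, because the -4 wraps around mod the group order.
r_in, r1, r2 = 11, 6, 5
c_in = commit(5, r_in)
c_out = (commit(9, r1) * commit(-4, r2)) % p

# The balance check alone cannot catch the bogus -4 output ...
assert c_out == c_in
# ... which is why each output needs a range proof that its value lies
# in [0, 2^64), and constructing that proof requires knowing the blind.
print("commitments balance even with a negative value")
```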


One last question about this CoinSwap proposal. Does it involve aggregating the bulletproofs/rangeproofs into a single proof using MPC?

No, it cannot, as that would require interaction between all m daily users, and even then would likely fail with just one malicious user.

In the case of your proposal, the one @scilio is working on, it does involve MPC, right?
I would think it possible, maybe not at all levels, but in a tree-like structure: small CoinSwap nodes join rangeproofs, which are again joined upstream by bigger nodes, etc. That way a malicious attacker could only block a small branch. Or does the aggregation not work like this? Does it require interaction between literally all participants?

The coinswap protocol is MPC among the mixnodes, not among the m users, who just submit once and donā€™t interact further.

But rangeproofs cannot be aggregated even in principle. Each output needs its own separate rangeproof.

I think what I described is not safe from key cancellation, as discussed in the safe slates. You can't skip a party due to the onion encryption, but I think the nodes at positions 1 and N could collude: node N cancels all the partial excess keys except node 1's, and node 1 then ignores all other partial sigs and adds only its own partial signature to seal the transaction (which I think would validate). We would either need memo protection or a safe multiparty kernel building.

What do you mean by this? What is achieved by the collusion?

Since node 1 is the only one whose partial sig is used in the end (all other partial excess and nonce values are cancelled out because node N uses the sum of their negations), node 1 knows the full private key for the kernel and could do the undo attack. Nothing critical of course, funds are safe because they're self-spends, but it does bring the model slightly closer to one where node 1 alone produces the final kernel and could perform the undo attack.
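A toy demonstration of the cancellation, again with a small multiplicative subgroup standing in for curve points (illustrative parameters only):

```python
import random

# Toy group: order-q subgroup of Z_p^* (illustrative parameters).
p, q, g = 2039, 1019, 4

n = 5
e = [random.randrange(1, q) for _ in range(n - 1)]  # nodes 1..N-1

# Node N (colluding with node 1) doesn't pick a random excess; it picks
# the negation of everyone's excess except node 1's.
e_N = (-sum(e[1:])) % q
e.append(e_N)

# Aggregate public excess for the kernel:
X = 1
for ei in e:
    X = (X * pow(g, ei, p)) % p

# The aggregate collapses to node 1's key alone, so node 1 knows the
# full private key of the kernel and could later sign an undo kernel.
assert X == pow(g, e[0], p)
print("aggregate excess equals node 1's key; node 1 can sign alone")
```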