Could you please make a list of the drawbacks of implementing CoinSwap?
Also, how will Grin's privacy with CoinSwap compare to Monero's?
And finally, will it actually get implemented, or is it just an idea that will never be implemented, like privacy for Bitcoin?
It requires finding reputable parties enjoying widespread trust in the Grin community who are willing to run the mixnodes. Perhaps even multiple sets that users can select from through wallet configuration.
If it is implemented and enabled by default in most wallets, and there are at least a few dozen genuine transactions mixed per day, then the privacy is comparable to Monero's.
It’s too good not to get implemented. But with current lack of developers, it will take time.
Is there already a bounty for developing a CoinSwap mix node? If not, why not put a bounty on it, who knows a developer might bite.
There is nothing to lose with putting a bounty on as far as I can see.
Thanks. Does that mean that when one sends a tx, it potentially won't be confirmed for several hours (compared to now), waiting for the coinswap to occur?
Before implementing it, some well-known cryptographers would need to verify that it works as expected. I think a bounty for that might be better, unless someone can get it reviewed for free.
Obviously, a daily mixer could induce waiting periods up to a day, while an hourly mixer (which only makes sense if a daily mixer gets lots of traffic) would limit that to an hour.
But if a wallet cannot wait, it can just go ahead and double-spend outputs sent to be swapped. The coinswap will automatically filter those out.
This paper contains a nice summary of privacy techniques that can be used on Bitcoin and their tradeoffs
I think the entries for Mimblewimble CoinSwap would be something like this
Anonymity set | Large
Unlinkability | ✓ (User generates a new commitment for each coinswap)
Untraceability | ✓ (If CoinParty has the untraceability point, we should as well since it is a similar model)
Value privacy | ✓ (CT)
Theft resistance | ✓ (Impossible to steal value. Undo attack gives it back to the same user)
DoS resistance | ✓ (Malicious users cannot DoS through invalid coinswaps because we can filter out the invalid ones - they consider centralized services DoS resistant as none of the participants can abort the protocol and affect others)
Sybil resistance | ✓ (Every coinswap costs X fees and MWixnet waits for N hours before running the protocol so everyone can participate)
No interaction with input users | ✓
No interaction with recipient | ✓ (self-interaction)
BTC Compatible | X
Direct send to recipient | ✓
No. Trx | 1
Min Block | 1
I’d also add something that is important in case of reorgs when we have transactions with big anonymity sets.
Reorg friendly | ✓
Reorg friendly means that given a transaction with 1000 inputs and 1000 outputs where one of the inputs was double-spent in a reorg, we would be able to replay the coinswap rounds and reconstruct a new coinswap shuffle transaction with 999 inputs and 999 outputs. This prevents future graph collapsing due to transactions using commitments that no longer exist, and localizes the issue to the subgraph going from the double-spent commitment forward.
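A toy sketch of the replay idea, assuming each coinswap entry can be modeled here as a simple (input, onion) record (the real entries are onion-encrypted payloads, and the `in_421` double-spend is hypothetical):

```python
# Model each coinswap entry as an (input, onion) pair -- a stand-in for the
# real onion-encrypted payloads submitted to node1.
entries = [{"input": f"in_{i}", "onion": f"onion_{i}"} for i in range(1000)]

double_spent = {"in_421"}  # hypothetical input invalidated by the reorg

# node1 re-filters entries whose input no longer exists on-chain, then the
# coinswap rounds are replayed over the surviving onions.
valid = [e for e in entries if e["input"] not in double_spent]

assert len(valid) == 999  # 999 inputs -> 999 outputs in the rebuilt shuffle
```

Only the entry anchored to the vanished commitment is dropped; everything downstream of the surviving 999 entries is reconstructed as before.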
Hopefully I can help. Request for Funding @scilio (CoinSwap Implementation)
I’d like to mention that there’s a way to blind the coinswap mixnodes as well, if people really wanted to. Alice and Bob can produce two 1-1 coinswaps with the same value whose ending outputs are multisig outputs that start the “fund” for each party. The mixnodes, even when colluding, will think they know the graph links, but the next transaction can be Alice claiming the output from Bob’s multisig output, which makes Bob learn the secret to Alice’s multisig output. This is basically a Bitcoin-like coinswap with only 2 kernels, because the funding transactions come for free from the coinswap transaction.
To produce a multisig output, given N mixnodes, Alice picks the shares for the mixnodes at odd positions and encrypts them, while Bob picks the even-positioned shares. I'm not sure the relative refund txs can be done though, due to the lack of a kernel; perhaps only height-locked refund txs.
The CoinSwap service could be used more dynamically.
Suppose Alice owns 100 outputs and the coinswap service outputs a transaction of 800 inputs on average. She can create a slightly larger coinswap message where the first node gets a set of inputs (with a single rangeproof for all of them that serves as a proof of ownership). The first mixnode, instead of doing `C + x_0*G`, first does `C = sum(C)` and only then proceeds with `C + x_0*G`. This makes the computation of the next output as if the first node got a single input, and it changes nothing. If that output was valid, all of the inputs are added to the resulting transaction set. If at least one node is honest, nobody knows where they consolidated, but the first node will know the links of the inputs.
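A toy sketch of that consolidation step, assuming commitments can be modeled as plain integers mod a prime (standing in for Pedersen commitments on secp256k1; the values and excess are made up):

```python
P = 2**61 - 1  # toy modulus standing in for the curve group

def mix_step(commit, excess):
    # a mixnode adding its random excess, i.e. the C + x*G step
    return (commit + excess) % P

alice_inputs = [123456, 789012, 345678]  # Alice's N input "commitments"
x0 = 42                                   # first node's excess for this entry

# N-1 consolidation: sum the inputs first, then proceed as usual
consolidated = sum(alice_inputs) % P      # C = sum(C)
after_node1 = mix_step(consolidated, x0)  # C + x_0*G

# Downstream nodes only ever see `after_node1`, exactly as if the entry
# had carried a single input commitment equal to the sum.
assert after_node1 == (sum(alice_inputs) + x0) % P
```

From node2 onward the processing is byte-for-byte identical to the single-input case, which is why only the first node learns the consolidation links.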
So in theory, you could have not just 1-1 self-spends but also N-1 (N inputs, 1 output). Consolidating in a large mixed set where only the first node knows (and whoever they tell) is better than consolidating in a single transaction where everyone knows which ones were consolidated.
Not sure if it’s worth supporting multiple inputs in a coinswap. It’s definitely not suitable for the first version.
Similarly, you could support 1-M self-spends. You still have a single input while you’re going through the mwixnet and then create M outputs in the last message - though this gives the last mixnode the information that they’re owned by the same party. On top of that, it would require more rangeproofs, which makes the messages larger. Why would anyone want to do a 1-M self-spend? Consider a scenario where the coinswap service has only 10 coinswaps because it’s not used much. You can put in a 1-20 self-spend and you’ve increased your anonymity set from 10 to 30. One of the 10 coinswappers inflated the output set, and there’s a 1/10 probability that it was you (assuming only one did it).
Note: this would require variable size messages to avoid correlation with message size.
Similar to the ones above, you could do an M-N transaction, e.g. a 2-2 payjoin. There are two major downsides to this:
- the undo attack is possible if all the mixnodes collude which could give money back to the sender
- no payment proofs
If the mixnodes don’t collude, they can’t know. There’s no reason for the mixnodes to perform the undo attack unless they were bribed to do so. But in the case where you’re transacting with people you trust, you could in theory be sending regular transactions if it supported M-N self-spends, because it’s impossible to know if it was a self-spend. This also means that you could silently throw many transactions in it without leaving a kernel. This is probably not a good idea, but it’s worth writing things down.
History has taught us that optional privacy doesn’t work. Unless this protocol is somehow made compulsory (e.g. by requiring that a transaction must have inputs that have been recently mixed in order to be relayed), then the sole fact that a transaction is using mixed inputs can be considered suspicious and may cause the transaction to be a target for censorship by miners or the coins involved to be labeled as ‘tainted’ and frozen by exchanges.
If most users send all of their newly created outputs to the coinswap service (e.g. the default behavior in the wallet is to send), then the majority of outputs would likely be coinswapped. This makes the probability of a coinswapped output being involved in malicious activity very low (a rough estimate though). I agree that the best-case scenario would be everyone using this without an option to opt out, but I think that moving towards more and more fair users using it is also a good direction. We welcome improvements to the protocol (or a completely new attempt).
If the majority of users used coinswap or mixers, there wouldn't be any tainted coins. Exchanges and others would bow, and you would taint those exchanges and others instead.
'When everybody is guilty, nobody is guilty.'
I’m going to attempt to restate my understanding of the protocol as it relates to a possible implementation to help my small brain before I get into the code.
Please correct me if I have anything critical incorrect or missing. I’ll do my best to not ask any questions already answered but no promises.
At some point soon I would like us to get a completed ‘specification.txt’ or ‘protocol.txt’ included in the mwixnet github repo in /docs/ or something to make all of this easier to assess, iterate on etc. (though @tromp may want to keep two docs- one for the pure high-level protocol and one for the specification against which the implementation should be assessed). It might be nice to have a @grincoin#mwixnet channel for dedicated discussion in keybase too.
Steps below assume a 3-node route and focus on 1 spend with some generalization (with an attempt to use simple language):
For each transaction, the owner of the spend provides and generates the following:
- 1 valid UTXO present onchain
- 1 random excess for each mixnode used (if 3 mixnodes, 3 random excesses)
- 1 proof of ownership of the UTXO to be used as the input (exact strategy not explicitly stated yet in the spec?)
- 1 range proof on UTXO/input commitment + sum of all random excess commitments generated for each node (i.e. final output)
From a fixed set of known mixnodes, the owner determines an onion routing path through the nodes (see question(s) at the end about dynamic vs fixed routing)
Once the nodes and path are selected by the owner (if dynamic), owner constructs and encrypts payloads to be routed
Note detailed key derivation etc is ignored here for simplicity
For each encrypted payload, a plain text commitment is included
- The first commitment is provided by owner, subsequent commitments are provided by the previous node in the path
The first payload is built and encrypted to a key of the first node
- commitment = UTXO/input commitment
- p1 = enc1(excess1, proof of ownership of UTXO/input commitment)
The second payload is encrypted to a key of the second node, then encrypted again to a key of the first node
- p2 = enc1(enc2(excess2))
The final payload is encrypted to a key of the final node, then encrypted again with previous keys of all previous nodes in reverse order
- p3 = enc1(enc2(enc3(excess3, range proof)))
- The range proof should be valid on original UTXO commitment + sum of all random excess commitments (i.e. final output)
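The layered payload construction above can be sketched as follows. Assumptions: `enc` here is a throwaway XOR-with-SHA-256-keystream stand-in, not a real cipher, and the node keys are made-up byte strings; the real protocol would derive per-node keys (e.g. via ECDH) and use proper authenticated encryption.

```python
import hashlib

def enc(key: bytes, data: bytes) -> bytes:
    # toy keystream: iterated SHA-256 of the key, XORed with the data
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

dec = enc  # XOR is its own inverse

k1, k2, k3 = b"node1-key", b"node2-key", b"node3-key"

# the owner builds the three payloads, innermost layer first
p1 = enc(k1, b"excess1|proof_of_ownership")
p2 = enc(k1, enc(k2, b"excess2"))
p3 = enc(k1, enc(k2, enc(k3, b"excess3|range_proof")))

# node1 peels the enc1 layer off every payload
assert dec(k1, p1) == b"excess1|proof_of_ownership"
p2_n2, p3_n2 = dec(k1, p2), dec(k1, p3)

# node2 peels enc2; node3 peels enc3 and recovers the range proof
assert dec(k2, p2_n2) == b"excess2"
assert dec(k3, dec(k2, p3_n2)) == b"excess3|range_proof"
```

Each node can only read its own layer, which is exactly the "onion encryption, not onion routing" property mentioned later in the thread.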
The owner submits data1 to the first node they selected, node1
- p1, p2, p3
- node1 decrypts all payloads, removing enc1 encryption layer
- node1 verifies that UTXO is unique, present onchain and that proof of ownership is valid
- node1 includes the modified commitment to create necessary data to send to node2
- sum of UTXO/input commitment and excess1
- enc2(enc3(excess3, range proof))
- The first node submits the above data2 to node2
node2 decrypts all payloads, removing enc2 encryption layer
- node2 derives new commitment by adding excess2 to the provided commitment by previous node
- node2 includes the modified commitment to create necessary data to send to the final node, node3
- sum of previous commitment and excess2
- enc3(excess3, range proof)
- The second node submits the above data3 to node3
node3 decrypts the remaining payload, removing enc3 encryption layer
- node3 builds the final output for the spend by summing the previous commitment with provided excess3
- node3 verifies that the range proof is valid for the updated final output commitment
node3 computes the first kernel with valid output(s) and commitment(s) with an excess of the sum of random excess(es)…
I’m still trying to fully wrap my head around the kernel aggregation section.
Does the protocol require that for a given mix, all spends must use the same routing order for j nodes, such that node n is always last node for each spend as well as node 1 always the first node for each spend?
It seems that if each node is supposed to produce one kernel per coinswap, the final commitments (outputs) need to all arrive at the same node for the first kernel to be built?
Though it seems simple enough and the invariant is straightforward, I’m still trying to work out the exact steps for how exactly the kernels would be derived based on the notation. Do we assume that routes for all spends are the same and fixed per coinswap and that all nodes j are participating and have all relevant data?
Really glad you took the time to go through it, @joltz.
Yes, the protocol assumes the order of nodes is static. This is because the first and last nodes are special. The first filters inputs that don’t exist and is an “entry point” for a coinswap, while the last filters out invalid coinswaps, e.g. if one doesn’t produce a valid commitment with a rangeproof, or if commitments clash between multiple coinswaps, or similar. We only need 1 mixnode to be honest, so it doesn’t hurt to have a static order.
Yes, you need to first know which coinswaps are valid so that the mixnode can “safely” sum up the private excess of these coinswaps and produce their kernel from them.
Yes, the route is static because we only need 1 mixnode to be fair. Changing the order doesn’t improve the blinding of the graph, so we make it simpler by having the same route, e.g. the coinswap service can provide the public keys for the path it takes.
The path is always the same. It’s just onion encryption, not onion routing. I don’t believe we would really gain anything by adding onion routing.
All filtering of outputs takes place on the trip from node1 to node3. node3 builds its kernel (kern3) from the sum of excesses for all included outputs.
Let’s step through an example with the 3 nodes and 4 outputs used in John’s description.
A. node1 has the four input commitments: [C1_0, C2_0, C3_0, C4_0]
B. node1 calculates:
- C1_0 + x1_1 = C1_1
- C2_0 + x2_1 = C2_1
- C3_0 + x3_1 = C3_1
- C4_0 + x4_1 = C4_1
C. then passes them to node2 (sorting first). Let’s assume after sorting node2 gets: [C2_1, C4_1, C1_1, C3_1]
D. node2 calculates:
- C2_1 + x2_2 = C2_2
- C4_1 + x4_2 = C4_2
- C1_1 + x1_2 = C1_2
- C3_1 + x3_2 = C3_2
E. then passes them to node3 (sorting first). Assume after sorting node3 gets: [C3_2, C4_2, C2_2, C1_2]
F. node3 calculates the final outputs:
- C3_2 + x3_3 = C3_3
- C4_2 + x4_3 = C4_3
- C2_2 + x2_3 = C2_3
- C1_2 + x1_3 = C1_3
G. node3 finds that the rangeproofs for C3_3 and C2_3 are invalid, so removes them. That leaves: [C4_3, C1_3]
H. node3 builds kernel kern3 using excesses from C4_3 and C1_3:
- kern3 - built using excess = x4_3 + x1_3
I. node3 sorts the final outputs. Let’s say that ends up [C1_3, C4_3]. This is the transaction’s output list.
J. node3 sends node2:
- out = [C1_3, C4_3]
- kerns = [kern3]
- indices = [2, 4]
K. node2 remembers what it sent in step (E), so indices [2, 4] means C4_2 and C1_2 were kept. It then builds kern2:
- kern2 - built using excess = x4_2 + x1_2
L. node2 remembers what it received from node1 in step (C), so translates the indices to get [2, 3]
M. node2 sends node1:
- out = [C1_3, C4_3]
- kerns = [kern2, kern3]
- indices = [2, 3]
N. node1 remembers what it sent in step (C), so indices [2, 3] means C4_1 and C1_1 were included. It builds kernel kern1:
- kern1 - built using excess = x4_1 + x1_1
O. node1 includes C4_0 and C1_0 as the inputs, giving a final transaction of:
- out = [C1_3, C4_3]
- kerns = [kern1, kern2, kern3]
- inputs = [C4_0, C1_0]
Transaction offset not included here, but node1 can easily include that as part of kern1 creation in step (N).
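The index bookkeeping in steps A-O can be replayed mechanically. A toy sketch, assuming commitments are modeled as strings like `"C4_2"` and tracking only the sorting/filtering/index translation (not the actual elliptic-curve math):

```python
def bump(c):
    # "C4_1" -> "C4_2": a node adding its excess to a commitment
    name, stage = c.rsplit("_", 1)
    return f"{name}_{int(stage) + 1}"

# step (C): what node1 sent; step (E): what node2 sent
sent_by_node1 = ["C2_1", "C4_1", "C1_1", "C3_1"]
sent_by_node2 = ["C3_2", "C4_2", "C2_2", "C1_2"]

# step (G): node3 drops outputs with invalid rangeproofs (C3_3 and C2_3)
finals = [bump(c) for c in sent_by_node2]
kept = [c for c in finals if c in ("C4_3", "C1_3")]

# step (J): node3 reports 1-based indices into the list node2 sent it
indices_to_node2 = [i + 1 for i, c in enumerate(sent_by_node2)
                    if bump(c) in kept]
assert indices_to_node2 == [2, 4]            # C4_2 and C1_2 were kept

# step (L): node2 translates into indices of the list node1 sent it
kept_at_node2 = [sent_by_node2[i - 1] for i in indices_to_node2]
indices_to_node1 = [i + 1 for i, c in enumerate(sent_by_node1)
                    if bump(c) in kept_at_node2]
assert indices_to_node1 == [2, 3]            # C4_1 and C1_1 were kept
```

Each hop only needs the list it sent plus the incoming index list, which is what lets every node determine its own kernel's excess sum without learning anything new about the graph.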
It’s not easy to keep track of all of this. We could benefit from better notation. Hopefully I didn’t make any mistakes.
Yeah, we have 3 options:
- Have the offset be 0
- Have one node set the offset
- A mixnode picks a random scalar k for its k*G kernel and then adds sum(private_excess_i) - k to the offset. This requires also passing the offset backwards though.
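A toy sketch of that third option, assuming scalars mod a toy group order stand in for secp256k1 scalars (k*G itself is not computed here, and the excess values are made up):

```python
import random

n = 2**61 - 1                      # toy group order
private_excesses = [11, 22, 33]    # this node's per-coinswap private excesses

k = random.randrange(1, n)         # random scalar for the node's k*G kernel
offset_delta = (sum(private_excesses) - k) % n  # added to the tx offset

# Balance check a verifier effectively performs: the kernel scalar plus the
# offset contribution must equal what the node actually added to the outputs.
assert (k + offset_delta) % n == sum(private_excesses) % n
```

The kernel scalar is uniformly random, so it leaks nothing about the individual excesses; the cost is that the offset accumulator has to travel backwards along with the kernels.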
@phyro thank you for the reply and clarification.
I was making a bad assumption re: routing. My concern would be that without dynamic routing, it only takes one node being unavailable for all mixing to stop? The risk would be that mixing could be completely disrupted by DoS'ing only one server? I know this adds a ton of complexity to address, though, so I'm not sure how much (if at all) it is worth talking about now.
@scilio thank you very much for the explanation, it is clear now. I’ll leave a proper review in your PR in the next few days, but on first pass it looks like you accomplished everything set out in your first milestone. Nicely done; glad to have you here.
You’re correct. If one node is not available, it becomes impossible to peel the onion and the process comes to a halt.
I like this option. It resembles what both sender and receiver do in late locking; picking a random kernel and setting/adjusting their offset accordingly.