Abstract
We present a coin shuffling proposal with the following properties:
- Users submit self-spends throughout the day. No interaction needed for shuffling.
- Shuffling is performed at the end of the day by a set of mixnodes that cannot steal any coins.
- Invalid self-spends are automatically filtered out. No need to abort or restart the shuffling.
- As long as at least one mixnode is honest, no one learns the input-output links.
- The size of the shuffle is limited only by block size and could easily be over a thousand.
- Each shuffle grows the chain size by only a small constant (~100 bytes), thanks to MW cut-through.
Widespread use of the protocol would leave the transaction graph mostly obscured.
Introduction
This document proposes a possible design for a "trustless" Mimblewimble CoinSwap service to which users submit self spend data throughout the day (or hour, or any desired period); the submissions are validated, aggregated, and published at the end of the day.
Coin swaps are useful for obfuscating the Mimblewimble transaction graph. They can be applied to self-spends of recently received outputs, or to self-spends resulting from canceled/aborted transactions. The lack of urgency of such self spends lends itself well to the slow nature (and possible failure) of coinswaps.
Whereas self spends normally reveal their input-output link publicly in every mempool on the network, use of a CoinSwap service obscures this link from public view. If one further trusts at least one service node to be honest, then the link is obscured from the nodes' view as well. In this sense little trust is needed to benefit from the service. If all nodes collude, they not only learn the input-output links, but could also choose to revert the entire coinswap after it confirms on-chain, which is still a manageable problem for wallets, even if very unlikely.
This document is not concerned with aggregation of regular transactions, or what could be called a CoinJoin service. Consistent use of coinswaps on outputs received in regular transactions should make coinjoins mostly redundant, although it does come at roughly a doubling of fees.
This design follows the Grin forum discussion at Daily Aggregator (Partial Transactions) - #3 by antioch and combines ideas of users antioch, oryhp, vegycslol, and myself.
We first present the general design, and then discuss several refinements dealing with practical issues such as timing attacks, spam attacks and ownership proofs.
The CoinSwap protocol
MWixnet Architecture
The protocol is similar to that of mixnets, but instead of mixing messages from senders to receivers, we mix self spends from coin owners to themselves. Whereas the messages in a mixnet do not change, our mixnodes transform commitments into new ones, and then achieve mixing by sorting the results.
Let there be m self spends $spend_1, \dots, spend_m$, and n mixnodes $node_1, \dots, node_n$. We let subscript i range over spends, and subscript j range over nodes.
Let $x_{i,j}$ be the randomly generated excess by which node j should additively transform the commitment in spend i: $C_{i,j} = C_{i,j-1} + x_{i,j} * G$.
Conceptually $C_{i,0}$ and $C_{i,n}$ are the input and output of spend i, except for a necessary fee adjustment:
- $C_{i,0} = C_i^{in} - fee * H$
- $C_{i,n} = C_{i,0} + (\sum_j x_{i,j}) * G = C_i^{out}$
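Spelled out, the per-node transformations telescope, which shows how the user's output relates to their input:

$$C_{i,n} = C_{i,n-1} + x_{i,n} G = \dots = C_{i,0} + \Big(\sum_{j=1}^{n} x_{i,j}\Big) G = C_i^{in} - fee \cdot H + \Big(\sum_{j=1}^{n} x_{i,j}\Big) G = C_i^{out}$$

so $C_i^{out}$ commits to the value of $C_i^{in}$ minus the fee, with its blinding factor shifted by $\sum_j x_{i,j}$.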
The excesses form a matrix such as this one for 4 self spends and 3 mixnodes:
$x_{1,1}$ $x_{1,2}$ $x_{1,3}$
$x_{2,1}$ $x_{2,2}$ $x_{2,3}$
$x_{3,1}$ $x_{3,2}$ $x_{3,3}$
$x_{4,1}$ $x_{4,2}$ $x_{4,3}$
which correspond to the linkages in this matrix C of commitments:
$C_{1,0} \leftrightarrow C_{1,1} \leftrightarrow C_{1,2} \leftrightarrow C_{1,3}$
$C_{2,0} \leftrightarrow C_{2,1} \leftrightarrow C_{2,2} \leftrightarrow C_{2,3}$
$C_{3,0} \leftrightarrow C_{3,1} \leftrightarrow C_{3,2} \leftrightarrow C_{3,3}$
$C_{4,0} \leftrightarrow C_{4,1} \leftrightarrow C_{4,2} \leftrightarrow C_{4,3}$
Each node receives a sorted column, not knowing the i indices shown, transforms each commitment by some excess specified for that commitment, sorts the results, and passes on the new column to the next node.
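As a minimal illustration of this step (not the actual implementation), the following sketch models commitments as plain integers so that adding $x_{i,j} * G$ becomes integer addition; all names and values are made up:

```python
# Toy model: a commitment is abstracted to a single integer, so the
# transformation C_{i,j} = C_{i,j-1} + x_{i,j}*G becomes integer addition.
# Real code would use Pedersen commitments on secp256k1.

def transform_column(column, excesses):
    """One mixnode's step: shift each known commitment by its assigned
    excess, drop unknown ones, and sort so the outgoing column reveals
    nothing about the incoming order."""
    transformed = [c + excesses[c] for c in column if c in excesses]
    return sorted(transformed)

# Example: 3 self spends passing through one node.
incoming = sorted([105, 42, 77])          # C_{i,j-1}, received in sorted order
excesses = {42: 9, 77: 3, 105: 11}        # x_{i,j} taken from each data_{i,j}
print(transform_column(incoming, excesses))   # [51, 80, 116] -> sent to node j+1
```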
Data provision
Node 1 is provided, for each i, with a tuple $data_{i,1}$ containing:
- commitment $C_i^{in}$
- a proof of ownership of $C_i^{in}$
- excess $x_{i,1}$
Node n is provided, for each i, with a tuple $data_{i,n}$ containing:
- commitment $C_{i,n-1}$
- corresponding range proof $BP_i$
- excess share $x_{i,n}$
Each other node j is provided, for each i, with a tuple $data_{i,j}$ containing:
- commitment $C_{i,j-1}$
- excess share $x_{i,j}$
Input validation
Validation starts with node 1 computing, for each i, $C_{i,0} = C_i^{in} - fee * H$. It also verifies that each input commitment is unique, present in the UTXO set, and has a valid proof of ownership. All invalid data is removed (resulting in a smaller m).
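A rough sketch of this filter in the same toy model; `in_utxo_set`, `verify_ownership` and `fee_h` are hypothetical stand-ins for the real UTXO lookup, ownership-proof verification and the $fee * H$ term:

```python
from collections import Counter

def validate_inputs(submissions, in_utxo_set, verify_ownership, fee_h):
    """Node 1's filter: keep only unique, unspent, provably owned inputs,
    and compute C_{i,0} = C_i^in - fee*H for the survivors."""
    counts = Counter(c_in for c_in, _, _ in submissions)
    valid = []
    for c_in, proof, x1 in submissions:          # (C_i^in, ownership proof, x_{i,1})
        if counts[c_in] > 1:
            continue                             # non-unique input commitment
        if not in_utxo_set(c_in) or not verify_ownership(c_in, proof):
            continue                             # invalid data is simply removed
        valid.append((c_in - fee_h, x1))         # (C_{i,0}, x_{i,1})
    return valid
```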
Output derivation
Then for each j from 1 to n in turn, node j computes, for each i, $C_{i,j} = C_{i,j-1} + x_{i,j} * G$, and if j < n, sends the ordered set of m commitments $C_{i,j}$ to its neighbour node j+1.
Any commitments received by a node that are not among its tuples are invalid and get removed.
Also, any non-unique $C_{i,j}$ are similarly invalid and removed. This will allow correct determination of validity in the later kernel derivation stage.
Output validation
Node n removes all received commitments that don't have valid range proofs.
Let Out be the set of remaining commitments.
Kernel derivation
Then for each j from n to 1 in turn, node j computes kernel $K_j$ for a public excess of $G * \sum_{valid} x_{i,j}$, and if j > 1, sends Out, the set of $n+1-j$ kernels, and the set of valid commitments $C_{i,j-1}$ to its neighbour node j-1.
This maintains an invariant that $\sum_{C \in Out} C - \sum_{valid} C_{i,j} = \sum_{k > j} K_k$.
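Identifying each kernel $K_k$ with its public excess, the invariant follows by induction on the backward pass: it holds trivially at j = n, and each step preserves it since $C_{i,j} = C_{i,j-1} + x_{i,j} * G$:

$$\sum_{C \in Out} C - \sum_{valid} C_{i,n} = 0 = \sum_{k > n} K_k$$

$$\sum_{C \in Out} C - \sum_{valid} C_{i,j-1} = \Big(\sum_{C \in Out} C - \sum_{valid} C_{i,j}\Big) + \sum_{valid} x_{i,j}\, G = \sum_{k > j} K_k + K_j = \sum_{k > j-1} K_k$$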
Mimblewimble CoinSwap proposal - #57 by oryhp explains how to optimize this to a single kernel.
Aggregation
Finally, node 1 ends up with the set of valid $C_{i,0}$ that by the invariant differs from the Out sum by all n kernels.
It constructs the final coinswap transaction CS from the set of valid outputs $C_i^{out}$ in Out, the set of valid inputs $C_i^{in}$, the n kernels $K_j$, and an extra input and/or output to collect the leftover fees (or pay missing fees) as appropriate.
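Continuing the toy integer model, node 1's final step could be sketched as follows; `fee_io` stands in for the extra fee-balancing input or output, and the structure is purely illustrative:

```python
def aggregate(valid_inputs, valid_c0, out_set, kernel_excesses, fee_io):
    """Node 1: sanity-check the invariant carried back from kernel
    derivation, then assemble the coinswap transaction CS."""
    assert sum(out_set) - sum(valid_c0) == sum(kernel_excesses)
    return {
        "inputs":  sorted(valid_inputs),        # valid C_i^in
        "outputs": sorted(out_set) + [fee_io],  # shuffled C_i^out, plus an extra
                                                # output collecting leftover fees
                                                # (could be an input instead)
        "kernels": list(kernel_excesses),       # K_1 ... K_n
    }
```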
Once confirmed, the coinswap is irreversible unless all n nodes collude.
Practical Issues
Data provision
The data provision step above suggests that users send data directly to n different nodes. While possible, this is not the most practical.
The following improvement is inspired by TOR, The Onion Router.
Let's assume that each node j has a known public key $PK_j$, and denote by ENC(PK, D) the pair <DHPK, E>, where DHPK is an ephemeral public key for a Diffie-Hellman exchange with PK, and E is the symmetric-key encryption of data D with the shared secret as key.
Then for each i we can recursively define an onion bundle $OB_{i,j}$ for nodes j…n as follows:
- $OB_{i,n} = ENC(PK_n, data_{i,n})$
- $OB_{i,j} = ENC(PK_j, \langle data_{i,j}, OB_{i,j+1} \rangle)$
Now a user only needs to send $OB_{i,1}$ to node 1, which can decrypt it to obtain both $data_{i,1}$ and $OB_{i,2}$. In the output derivation stage, each node j decrypts each $OB_{i,j}$, checks that the received $C_{i,j-1}$ matches the one in $data_{i,j}$ (removing mismatches), computes $C_{i,j}$, sorts all $OB_{i,j+1}$ by $C_{i,j}$, and passes those on to its neighbour node j+1.
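A compact sketch of the bundling, with a fake ENC (nesting tuples tagged by the recipient's key) standing in for the real Diffie-Hellman plus symmetric encryption; all function names are made up:

```python
def enc(pk, payload):
    """Stand-in for ENC(PK, D): returns the pair <DHPK, E>. Here the
    'encryption' is faked by tagging the payload with the recipient key."""
    dhpk = f"ephemeral-for-{pk}"
    return (dhpk, (pk, payload))            # real code: ECDH + symmetric cipher

def build_onion(node_pks, node_data):
    """Build OB_{i,1}: node n's data innermost, node 1's data outermost."""
    bundle = enc(node_pks[-1], node_data[-1])                 # OB_{i,n}
    for pk, data in zip(node_pks[-2::-1], node_data[-2::-1]):
        bundle = enc(pk, (data, bundle))                      # OB_{i,j}
    return bundle

def peel(pk, bundle):
    """Node j 'decrypting' its layer, yielding (data_{i,j}, OB_{i,j+1})
    (or just data_{i,n} at the last node)."""
    _dhpk, (recipient, payload) = bundle
    assert recipient == pk                  # stand-in for shared-secret decryption
    return payload

# Example with 3 nodes: a user submits only OB_{i,1} to node 1.
ob1 = build_onion(["PK1", "PK2", "PK3"], ["data1", "data2", "data3"])
data1, ob2 = peel("PK1", ob1)
```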
This onion bundling not only prevents timing attacks on users whose spend data submissions to different nodes are correlated in time, but also allows them to send their data to a single node, possibly as soon as they receive the output they want to self spend.
Fee handling (Grin)
In a more realistic setting, each node pays itself a mixfee with a 1-input 1-output transaction replacing $K_j$, where $\sum_{valid} x_{i,j}$ can instead be added to the transaction offset.
The contributed fees should then match the coinswap relay fee plus n mixfees. In Grin this gives the identity (with FB being the fee base of half a milligrin, and fee weights of 1 per input, 21 per output, and 3 per kernel):
|Out| * fee = (|Out| + n) * (1 + 21) * FB + n * 3 * FB + n * mixfee, or
mixfee = |Out| * (fee - 22 * FB) / n - 25 * FB.
For simplicity, we could require a fee of (22 + n) * FB, resulting in a mixfee of (|Out| - 25) * FB. In that case, for a mixnode to earn 10 grin-cents a day requires 225 daily self spends.
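The arithmetic behind that last figure, assuming FB = 0.0005 grin and a grin-cent of 0.01 grin (the choice of n cancels out):

```python
FB = 0.0005                      # assumed fee base in grin (half a milligrin)
n, out = 3, 225                  # any n works; 225 self spends survive validation
fee = (22 + n) * FB              # per-user fee contribution
mixfee = out * (fee - 22 * FB) / n - 25 * FB
print(mixfee)                    # 0.1 grin, i.e. 10 grin-cents per node per day
```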
Spam attacks
A deluge of bogus onion bundles could fill up node 1's memory buffers long before the day is done. This is mostly remedied by doing immediate input validation on every new bundle received by node 1. At the end of the day, we'd still redo the UTXO membership checks since self spend inputs could be spent during the day.
Thanks to the proofs of ownership, the number of self spends of any one attacker, surviving input validation, is limited by the number of outputs they own.
Having enough buffers to cover the entire UTXO set is one solution.
If, after input validation at the end of the day, more spends remain than fit in a block (about 40000/22 = 1818), then two options are available:
- run a modified version of the above protocol that merely filters out all invalid data. Any invalidly spent input can be banned from future coinswapping.
- randomly partition them into equal-sized (under 1818) subsets, and run the coinswap on each in turn, as sketched below.
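A sketch of the second option; `MAX_SPENDS` is a hypothetical constant taken from the block-size estimate above:

```python
import math
import random

MAX_SPENDS = 1818        # ~ block weight 40000 / 22 per self spend

def partition(spends, max_size=MAX_SPENDS):
    """Randomly split the surviving self spends into near-equal subsets,
    each small enough for its coinswap to fit in one block."""
    spends = list(spends)
    random.shuffle(spends)
    k = math.ceil(len(spends) / max_size)       # number of subsets needed
    return [spends[i::k] for i in range(k)]     # subset sizes differ by at most 1
```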
A proof of ownership of commitment C can take the form of a proof of a length-1 vector commitment as described in https://forum.grin.mw/t/vector-set-commitments-generalize-schnorr-and-utxo-proof-of-ownership/
which takes only 32 bytes more than a Schnorr signature.
Bandwidth Optimization for n=2
In the case of only 2 non-colluding nodes, we can do away with the (relatively large) proofs of ownership, and reverse the flow of data. That is, the onions are constructed with the (much smaller) node 1 data inside the bundle for node 2. Node 2 starts with output validation and sends transformed valid commitments back to node 1, which then does input validation on its transformed commitments. Kernel derivation then proceeds back to node 2. In this case it's possible to handle spam with intermediate filtering rounds. Even though both nodes determine valid sets of inputs and outputs, they would require collusion to link them together. One downside is that node 2 can try to correlate submission order with the order that final coinswap inputs appeared on-chain.
Other Bandwidth Optimization
Node n could send the large Out set directly to node 1, reducing bandwidth through intermediate nodes on the backward pass by about half, but at the cost of those intermediate nodes no longer being able to check the invariant.
Reorg protection
A deep reorg is likely to invalidate a big coinswap transaction. In order to still be able to redo most of the contained self spends, node 1 could remember all valid input onion bundles for the past few days, and reprocess any that were undone in a reorg.