# Introduction

This document proposes a possible design for a "trustless" Mimblewimble CoinSwap service to which users submit self spend data throughout the day (or hour, or any other desired period); the submissions are validated, aggregated, and published at the end of the day.

Coinswaps are useful for obfuscating the Mimblewimble transaction graph. They can be applied to self spends of recently received outputs, or to self spends resulting from canceled/aborted transactions. The lack of urgency of such self spends lends itself well to the slow nature (and possible failure) of coinswaps.

Whereas self spends normally reveal their input-output link publicly in every mempool on the network, use of a CoinSwap service obscures this link from public view. If one further trusts at least one service node to be honest, then the link is obscured from the nodes’ view as well. In this sense little trust is needed to benefit from the service. If all nodes collude, they not only learn the input-output links, but could also choose to revert the entire coinswap after it confirms on-chain; this remains a manageable problem for wallets, even if very unlikely.

This document is not concerned with aggregation of regular transactions, or what could be called a CoinJoin service. Consistent use of coinswaps on outputs received in regular transactions should make coinjoins mostly redundant, although it does come at roughly double the fees.

This design follows the Grin forum discussion at Daily Aggregator (Partial Transactions) - #3 by antioch and combines ideas of users antioch, oryhp, vegycslol, and myself.

We first present the general design, and then discuss several refinements dealing with practical issues such as timing attacks, spam attacks and ownership proofs.

# The CoinSwap protocol

## MWixnet Architecture

The protocol is similar to that of mixnets, but instead of mixing messages from senders to receivers, we mix self spends from coin owners to themselves. Whereas the messages in a mixnet do not change, our mixnodes transform commitments into new ones, and then achieve mixing by sorting the results.

Let there be m self spends spend_1 … spend_m, and n mixnodes node_1 … node_n. We let subscript i range over spends, and subscript j range over nodes.

Let x_{i,j} be the randomly generated excess by which node j should additively transform the commitment in spend i: C_{i,j} = C_{i,j-1} + x_{i,j} * G.

Conceptually C_{i,0} and C_{i,n} are the input and output of spend i, except for a necessary fee adjustment:

- C_{i,0} = C_{i}^{in} - fee * H
- C_{i,n} = C_{i,0} + (Σ_{j} x_{i,j}) * G = C_{i}^{out}
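As a sanity check, the transform chain can be simulated with toy arithmetic. Plain integers mod a prime stand in for elliptic-curve points, and P, G, H, and all values below are illustrative stand-ins, not real secp256k1 Pedersen commitments:

```python
import secrets

P = 2**61 - 1            # toy group order (not cryptographically secure)
G, H = 3, 7              # stand-ins for the blinding and value generators

def commit(value, blind):
    """Toy Pedersen commitment: C = value*H + blind*G (mod P)."""
    return (value * H + blind * G) % P

fee = 23
value, blind = 1000, secrets.randbelow(P)
C_in = commit(value, blind)

# Fee adjustment: C_{i,0} = C_{i}^{in} - fee * H
C = (C_in - fee * H) % P

# Each node j adds its excess: C_{i,j} = C_{i,j-1} + x_{i,j} * G
xs = [secrets.randbelow(P) for _ in range(3)]
for x in xs:
    C = (C + x * G) % P

# C_{i,n} commits to (value - fee) under the summed blinding factor
assert C == commit(value - fee, (blind + sum(xs)) % P)
```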

The excesses form a matrix such as this one for 4 self spends and 3 servers:

    x_{1,1} x_{1,2} x_{1,3}
    x_{2,1} x_{2,2} x_{2,3}
    x_{3,1} x_{3,2} x_{3,3}
    x_{4,1} x_{4,2} x_{4,3}

which correspond to the linkages in this matrix C of commitments:

    C_{1,0} <-> C_{1,1} <-> C_{1,2} <-> C_{1,3}
    C_{2,0} <-> C_{2,1} <-> C_{2,2} <-> C_{2,3}
    C_{3,0} <-> C_{3,1} <-> C_{3,2} <-> C_{3,3}
    C_{4,0} <-> C_{4,1} <-> C_{4,2} <-> C_{4,3}

Each node receives a sorted column, not knowing the i indices shown, transforms each commitment by some excess specified for that commitment, sorts the results, and passes on the new column to the next node.
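A single node's mixing step might look like the following sketch, under the same toy convention that commitments are integers mod P and adding x * G is modular addition. The `excess_for` lookup from expected commitment to its excess is a hypothetical interface:

```python
P = 2**61 - 1
G = 3

def mix_round(column, excess_for):
    """column: sorted commitments C_{i,j-1} received from the previous node.
    excess_for: maps each commitment this node was provisioned for to its
    excess x_{i,j}. Commitments not among the node's tuples are removed."""
    transformed = []
    for c in column:
        x = excess_for.get(c)
        if x is None:
            continue                   # invalid: not among this node's tuples
        transformed.append((c + x * G) % P)
    # Re-sorting severs the positional link between input and output columns
    return sorted(transformed)
```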

## Data provision

Node 1 is provided, for each i, with a tuple data_{i,1} containing:

- commitment C_{i}^{in}
- a proof of ownership of C_{i}^{in}
- excess x_{i,1}

Node n is provided, for each i, with a tuple data_{i,n} containing:

- commitment C_{i}^{out}
- corresponding range proof BP_{i}
- excess share x_{i,n}

Each other node j is provided, for each i, with a tuple data_{i,j} containing:

- commitment C_{i,j-1}
- excess share x_{i,j}

## Input validation

Validation starts with node 1 computing, for each i, C_{i,0} = C_{i}^{in} - fee * H. It also verifies that each input commitment is unique, present in the UTXO set, and has a valid proof of ownership. All invalid data is removed (resulting in a smaller m).
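A minimal sketch of this filtering step, assuming hypothetical `utxo_set` (a set of commitments) and `verify_ownership` (an ownership-proof checker) interfaces:

```python
from collections import Counter

def validate_inputs(tuples, utxo_set, verify_ownership):
    """Keep only tuples whose input commitment is unique among submissions,
    present in the UTXO set, and backed by a valid proof of ownership."""
    counts = Counter(t["C_in"] for t in tuples)
    return [t for t in tuples
            if counts[t["C_in"]] == 1
            and t["C_in"] in utxo_set
            and verify_ownership(t)]
```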

## Output derivation

Then for each j from 1 to n in turn, node j computes, for each i, C_{i,j} = C_{i,j-1} + x_{i,j} * G, and if j<n, sends the ordered set of m commitments C_{i,j} to its neighbour node j+1.

Any commitments received by a node that are not among its tuples are invalid and get removed.

Also, any non-unique C_{i,j} are similarly invalid and removed. This will allow correct determination of validity in the later kernel derivation stage.

## Output validation

Node n removes all received commitments that don’t have valid range proofs.

Let Out be the set of remaining commitments.

## Kernel derivation

Then for each j from n to 1 in turn, node j computes kernel K_{j} for a public excess of G * Σ_{valid} x_{i,j}, and if j>1, sends Out, the set of n+1-j kernels, and the set of valid commitments C_{i,j-1} to its neighbour node j-1.

This maintains an invariant that Σ_{C ∈ Out} C - Σ_{valid} C_{i,j} = Σ_{k>j} K_{k}.
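This invariant can be checked by a short induction on j, with "valid" meaning the spends whose final commitment survived into Out. At j = n both sides vanish, since Out is exactly the set of valid C_{i,n}. For the step from j+1 down to j, kernel K_{j+1} has public excess G * Σ_{valid} x_{i,j+1}, so

    Σ_{valid} C_{i,j+1} = Σ_{valid} (C_{i,j} + x_{i,j+1} * G)
                        = Σ_{valid} C_{i,j} + excess(K_{j+1})

and therefore

    Σ_{C ∈ Out} C - Σ_{valid} C_{i,j} = (Σ_{C ∈ Out} C - Σ_{valid} C_{i,j+1}) + K_{j+1}
                                      = Σ_{k>j+1} K_{k} + K_{j+1} = Σ_{k>j} K_{k}.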

## Aggregation

Finally, node 1 ends up with a set of valid C_{i,0} that by the invariant differs from the Out sum by all n kernels.

It constructs the final coinswap transaction CS from the set of valid outputs C_{i}^{out} in Out, the set of valid inputs C_{i}^{in}, the n kernels K_{j}, and an extra input and/or output to collect the leftover fees (or pay missing fees) as appropriate.

Once confirmed, the coinswap is irreversible unless all n nodes collude.
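As a sanity check, the overall balance of the aggregated transaction can be simulated with the same toy arithmetic (integers mod P stand in for curve operations; constants are illustrative, and the leftover-fee input/output is ignored here):

```python
import secrets

P = 2**61 - 1
G, H = 3, 7
m, n, fee = 4, 3, 23

blinds = [secrets.randbelow(P) for _ in range(m)]
values = [secrets.randbelow(10**6) + fee for _ in range(m)]
inputs = [(v * H + b * G) % P for v, b in zip(values, blinds)]

# x[i][j]: the excess node j applies to spend i
x = [[secrets.randbelow(P) for _ in range(n)] for _ in range(m)]
outputs = [(c - fee * H + sum(row) * G) % P for c, row in zip(inputs, x)]

# Kernel excess of node j: G * Σ_{i} x_{i,j}
kernels = [sum(x[i][j] for i in range(m)) * G % P for j in range(n)]

# Sum of outputs minus sum of fee-adjusted inputs equals the combined kernels
assert (sum(outputs) - sum(c - fee * H for c in inputs)) % P == sum(kernels) % P
```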

# Practical Issues

## Data provision

The data provision step above suggests that users send data directly to n different nodes. While possible, this is not the most practical.

The following improvement is inspired by Tor, The Onion Router.

Let’s assume that each node j has a known public key PK_{j}, and denote by ENC(PK, D) the pair <DHPK, E>, where DHPK is an ephemeral Diffie-Hellman public key generated for PK, and E is the symmetric-key encryption of data D under the derived shared secret.

Then for each i we can recursively define an onion bundle OB_{i,j} for nodes j…n as follows:

- OB_{i,n} = ENC(PK_{n}, data_{i,n})
- OB_{i,j} = ENC(PK_{j}, <data_{i,j}, OB_{i,j+1}>)

Now a user only needs to send OB_{i,1} to node 1, which decrypts it to obtain both data_{i,1} and OB_{i,2}. In the output derivation stage, each node j decrypts each OB_{i,j}, checks that the received C_{i,j-1} matches the one in data_{i,j} (removing mismatches), computes C_{i,j}, sorts all OB_{i,j+1} by C_{i,j}, and passes those on to its neighbour node j+1.

This onion bundling not only prevents timing attacks on users whose spend data submissions to different nodes are correlated in time, but also allows them to send their data to a single node, possibly as soon as they receive the output they want to self spend.
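The recursive bundle construction can be sketched as follows. A real ENC(PK, D) performs an ECDH exchange plus authenticated symmetric encryption; this toy collapses the asymmetric step and uses a keyed XOR stream built from hashlib directly with the node's key, which provides no real security. Only the bundle structure is the point, and pickle as the serialization format is an assumption:

```python
import hashlib
import pickle
import secrets

def _xor(key, blob):
    """Keyed XOR stream cipher stand-in (toy, NOT secure)."""
    stream = b""
    ctr = 0
    while len(stream) < len(blob):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(blob, stream))

def enc(key, data):
    return _xor(key, pickle.dumps(data))

def dec(key, blob):
    return pickle.loads(_xor(key, blob))

def build_onion(keys, datas):
    """OB_{i,n} = ENC(PK_n, data_{i,n});
    OB_{i,j} = ENC(PK_j, <data_{i,j}, OB_{i,j+1}>)."""
    ob = enc(keys[-1], datas[-1])
    for key, d in zip(reversed(keys[:-1]), reversed(datas[:-1])):
        ob = enc(key, (d, ob))
    return ob

# Usage: each node peels one layer, recovering data_{i,j} and OB_{i,j+1}
keys = [secrets.token_bytes(32) for _ in range(3)]
datas = [{"node": j} for j in range(3)]
ob = build_onion(keys, datas)
d1, ob2 = dec(keys[0], ob)
d2, ob3 = dec(keys[1], ob2)
assert d1 == datas[0] and d2 == datas[1] and dec(keys[2], ob3) == datas[2]
```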

## Fee handling (Grin)

In a more realistic setting, each node pays itself a mixfee with a 1-input 1-output transaction replacing K_{j}, where Σ_{valid} x_{i,j} can instead be added to the transaction offset.

The contributed fees should then match the coinswap relay fee plus n mixfees. In Grin this gives the identity (with FB being the fee base of half a millicent):

|Out| * fee = (|Out| + n) * (1 + 21) * FB + n * 3 * FB + n * mixfee, or

mixfee = |Out| * (fee - 22 * FB) / n - 25 * FB.

For simplicity, we could require a fee of (22 + n) * FB, resulting in a mixfee of (|Out| - 25) * FB. In that case, for a mixnode to earn 10 grin-cents a day requires 225 daily self spends.
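The arithmetic can be checked numerically. Values for n and |Out| are illustrative, with FB expressed in grin-millicents:

```python
FB = 0.5                 # fee base: half a millicent
n = 3                    # number of mixnodes
out = 225                # |Out|: surviving self spends

fee = (22 + n) * FB
mixfee = out * (fee - 22 * FB) / n - 25 * FB
assert mixfee == (out - 25) * FB        # the simplification in the text
assert mixfee == 100.0                  # 100 millicents = 10 grin-cents/day
```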

## Spam attacks

A deluge of bogus onion bundles could fill up node 1’s memory buffers long before the day is done. This is mostly remedied by doing immediate input validation on every new bundle received by node 1. At the end of the day, we’d still redo the UTXO membership checks since self spend inputs could be spent during the day.

Thanks to the proofs of ownership, the number of self spends of any one attacker, surviving input validation, is limited by the number of outputs they own.

Having enough buffers to cover the entire UTXO set is one solution.

If after input validation at the end of the day, more spends remain than fit in a block (about 40000/22 = 1818) then two options are available:

- run a modified version of the above protocol that merely filters out all invalid data. Any invalidly spent input can be banned from future coinswapping.
- randomly partition them into equal sized (under 1818) subsets, and run coinswap on each in turn.
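The random-partition option can be sketched as follows: shuffle the surviving spends with a cryptographic RNG, then split them into near-equal subsets that each fit in a block (the 1818 limit comes from the 40000/22 estimate above):

```python
import secrets

def partition(spends, max_per_block=1818):
    """Randomly partition spends into near-equal subsets of at most
    max_per_block elements each."""
    if not spends:
        return []
    spends = spends[:]
    for i in range(len(spends) - 1, 0, -1):   # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)
        spends[i], spends[j] = spends[j], spends[i]
    k = -(-len(spends) // max_per_block)      # number of subsets (ceil div)
    size = -(-len(spends) // k)               # near-equal subset size
    return [spends[i:i + size] for i in range(0, len(spends), size)]
```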

A proof of ownership of commitment C can take different forms:

- a rangeproof on C + G
- a rangeproof on C that differs from the one on-chain
- if possible, a compact encoding of a small tweak of the on-chain rangeproof

## Bandwidth Optimization for n=2

In the case of only 2 non-colluding nodes, we can do away with the (relatively large) proofs of ownership, and reverse the flow of data. That is, the onions are constructed with the (much smaller) node 1 data inside the bundle for node 2. Node 2 starts with output validation and sends transformed valid commitments back to node 1, which then does input validation on its transformed commitments. Kernel derivation then proceeds back to node 2. In this case it’s possible to handle spam with intermediate filtering rounds. Even though both nodes determine valid sets of inputs and outputs, they would require collusion to link them together. One downside is that node 2 can try to correlate submission order with the order that final coinswap inputs appeared on-chain.

## Other Bandwidth Optimization

Node n could send the large Out set directly to node 1, reducing bandwidth through intermediate nodes on the backward pass by about half, but at the cost of those intermediate nodes no longer being able to check the invariant.

## Reorg protection

A deep reorg is likely to invalidate a big coinswap transaction. In order to be able to still redo most of the contained self spends, node 1 could remember all valid input onion bundles for the past few days, and reprocess any that were undone in a reorg.