Yo Dawg, I heard you like CoinJoins

The upcoming Grin++ release (was supposed to be this weekend, but I’m delaying again) will be as simple as:
[image]

For all other wallets, the API is super simple. Just make a JSON-RPC request over Tor to grinjoin5pzzisnne3naxx4w2knwxsyamqmzfnzywnzdk7ra766u7vid.onion/v1 with the method submit_tx; the params object just needs a tx field containing the JSON serialization of the transaction.
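For example, a minimal Python sketch of that call, assuming a local Tor SOCKS proxy on 127.0.0.1:9050 and a `tx` dict holding your wallet's JSON-serialized transaction (the exact transaction schema and the server's response shape are wallet/server specific, so treat those as assumptions):

```python
import requests  # pip install requests[socks]

GRINJOIN_URL = "http://grinjoin5pzzisnne3naxx4w2knwxsyamqmzfnzywnzdk7ra766u7vid.onion/v1"

# Route the request through a local Tor SOCKS proxy (default port 9050).
# The 'socks5h' scheme makes Tor resolve the .onion address.
TOR_PROXY = {"http": "socks5h://127.0.0.1:9050"}


def submit_tx(tx: dict) -> dict:
    """Submit a JSON-serialized transaction to the GrinJoin server."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "submit_tx",
        "params": {"tx": tx},  # 'tx' holds the transaction's JSON serialization
    }
    response = requests.post(GRINJOIN_URL, json=payload, proxies=TOR_PROXY, timeout=120)
    response.raise_for_status()
    return response.json()
```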

8 Likes

Sweet. Does the GrinJoin server see all transactions in their de-aggregated form?

3 Likes

Yes, but at least it’s not an unknown person running a sniffer node.

4 Likes

Yep. Scout’s honor I don’t do any logging. But since cryptocurrencies don’t usually rely on Scout’s Honor, better solutions are needed.

7 Likes

Does this GrinJoin option replace dandelion stem phase?

2 Likes

Yes. Dandelion would actually result in less privacy here, so we just skip it and use Tor instead.

2 Likes

I wonder if it's possible to blind the transactions before they are sent to the GrinJoin server. Can we figure out a way to blind the transaction so that the GrinJoin server doesn't see which outputs are being spent? After aggregating the transactions, the blinding factors are shared between the join participants (not the GrinJoin server); then each participant can derive the final joined transaction by removing all known blinding factors. Now each participant can broadcast the single join transaction to the network.

Another way of looking at this: a GrinJoin server takes anonymous transactions and aggregates them into a blinded join transaction. The server then sends the blinded-join-transaction back to each anonymous participant. The participants then exchange their secret blinding factors with each other to get the committable join transaction.

There would need to be a way for these participants to communicate with each other anonymously as well, possibly using Tor. The GrinJoin server could include each participant's supplied form of communication.

EDIT: Here’s what I’m thinking.

Step 1: The participant has a transaction they wish to commit anonymously to the blockchain. They create an asymmetric key pair. They use the private key as the blinding factor to create a blinded-transaction. They also use the private key to sign their communication-method. The communication-method could support different protocols, but preferably it's an onion address from which the participant can receive requests. The participant sends the blinded-transaction and the signed communication-method to the GrinJoin server as a payload.

Step 2: The GrinJoin server receives the payload and waits for more payloads. As the server receives more payloads, it joins the blinded-transactions together until a certain number of joins is reached, which can either be specified as criteria in each payload or set to a standard default (let's call this max-join). Once the number of transactions reaches max-join, the server sends the blinded-join-transaction and every participant's communication-method to each participant using their communication-method.

Step 3: A participant receives the blinded-join-transaction and a set of communication-methods. The participant then sends their private key (created in step 1) to each other participant using their communication-method.

Step 4: Concurrently with step 3, the participant receives private keys from every other participant. The participant must verify each private key using the signature included in its communication-method. Once all private keys for each communication-method have been collected, the participant removes each private key (blinding factor) from the blinded-join-transaction. The outcome is a join-transaction commitment ready to be submitted to the blockchain.

Note: This concept depends on the ability to add and remove blinding factors to/from join-transactions. This means blinding-factor arithmetic must be commutative and associative, so factors can be added and removed in any order.
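To illustrate just that note (not the full protocol): blinding factors in Grin are scalars modulo the secp256k1 group order, so adding them and later removing them works regardless of the order participants apply them in. A toy Python sketch, with all names purely illustrative:

```python
import secrets

# secp256k1 group order; blinding factors are scalars mod N.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141


def add_blind(value: int, blind: int) -> int:
    """'Blind' a value by adding a secret scalar."""
    return (value + blind) % N


def remove_blind(value: int, blind: int) -> int:
    """Remove a previously added blinding factor."""
    return (value - blind) % N


# Three participants blind a joined value, then each removes their own
# factor in a different order than it was added.
true_value = secrets.randbelow(N)
blinds = [secrets.randbelow(N) for _ in range(3)]

blinded = true_value
for b in blinds:
    blinded = add_blind(blinded, b)

for b in reversed(blinds):  # any order works: addition mod N is commutative
    blinded = remove_blind(blinded, b)

assert blinded == true_value
```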

3 Likes

Nice work David! Does it run in a fixed interval or does it wait for a threshold?

4 Likes

Fixed interval. Leaning toward 10 minutes. Thresholds just open up the possibility of sybil-like attacks.

4 Likes

I sort of follow what you’re saying, but there are a few parts that aren’t entirely clear. First off, can you confirm what you mean by “blinded-transaction”? What are we blinding exactly: the kernel? Inputs? Outputs? The tx offset?

If the transactions are blinded, do you have any way of validating them before unblinding? If not, how do you deal with the case where the final unblinded Join transaction is invalid? It doesn’t seem like there’d be any way to know which pre-joined transaction was invalid, so it’s easy to DoS.

I’ll have some follow-up questions based on your responses to those few questions. Thanks!

2 Likes

@shush thanks for sharing, I’ve been thinking along the same lines myself, but am not knowledgeable enough. :slight_smile: One of the problems I’ve encountered is that by sharing private keys (or “ways to unblind”) with other participants, these participants would also be able to decrypt the original message (the unaggregated transaction), wouldn’t they?

In my thinking so far, I’ve seen a lot of similarities between this problem and the Mental Poker problem, and the various decentralised shuffling protocols that have come out of it, where an output/kernel would be the equivalent of a card to be shuffled. A big difference is that the exact deck of cards is not known to the participants in advance.

Virtue poker’s description of how they do the shuffle at the poker table seems relevant and interesting here.

Ignoring the DoS risk, tx validation, and other separate problems, if the main objective is to join inputs, outputs, and kernels in a way where the central party cannot trace the unjoined transaction back to a particular user, imagine a protocol where:

  1. Each input, output, and kernel is equivalent to a card in one of three distinct decks (blue, red, green card backs); a “player” is a user, and the GrinJoin server is the “dealer”.

  2. Each player first encrypts their own transaction(s) accordingly, hiding the true value of each card; this is the equivalent of putting a “lock” on the cards.

  3. There is a protocol where players share the encrypted cards with each other; each player puts their own locks on all of the cards and shuffles them. At the end of this, nobody knows the true order of the deck.

  4. The GrinJoin server receives all the cards, does a shuffle, and puts their own lock on the cards.

  5. Similar to 3, the cards are now shared sequentially amongst the players, but now the players remove their own individual locks. Still, the last player receiving the cards cannot see their face value, as they still carry the GrinJoin server’s lock.

  6. Finally, the last player sends the cards to the GrinJoin server, which can remove its own final lock and now has a verifiably shuffled deck without knowing which inputs/outputs/kernels belong together.

I don’t know if any of this makes sense, and it’s probably needlessly complicated. But each player would only need to trust themselves to shuffle the deck correctly in order for the outputs to be mixed. :slight_smile:
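Just to make the “locks” idea concrete, here is a toy Python sketch using SRA-style commutative encryption (modular exponentiation with secret exponents, as in the classic mental poker papers). The modulus, the “cards”, and the player names are purely illustrative; this is not meant to be secure or to be the actual Virtue poker scheme:

```python
import secrets
from math import gcd

# Toy prime modulus for illustration only; a real scheme needs a large safe
# prime and a proper encoding of inputs/outputs/kernels as group elements.
P = 2**127 - 1


def make_lock() -> tuple[int, int]:
    """Create an SRA-style lock: secret exponent e and its inverse d mod P-1."""
    while True:
        e = secrets.randbelow(P - 2) + 2
        if gcd(e, P - 1) == 1:
            return e, pow(e, -1, P - 1)


def lock(card: int, e: int) -> int:
    return pow(card, e, P)


def unlock(card: int, d: int) -> int:
    return pow(card, d, P)


# Toy "cards" (stand-ins for commitments/kernels encoded as integers).
cards = [11, 22, 33]

# Two players and the dealer each create a lock.
p1_e, p1_d = make_lock()
p2_e, p2_d = make_lock()
dealer_e, dealer_d = make_lock()

# Players lock and shuffle, then the dealer locks and shuffles again.
deck = [lock(lock(c, p1_e), p2_e) for c in cards]
secrets.SystemRandom().shuffle(deck)
deck = [lock(c, dealer_e) for c in deck]
secrets.SystemRandom().shuffle(deck)

# Players remove their locks in any order; exponentiation commutes.
deck = [unlock(unlock(c, p1_d), p2_d) for c in deck]

# Only the dealer's lock remains; removing it reveals a shuffled deck
# without the dealer knowing who contributed which card.
deck = [unlock(c, dealer_d) for c in deck]
assert sorted(deck) == sorted(cards)
```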

4 Likes

This is definitely an approach we could take. This protocol, although rather complex, actually works as is. But as mentioned, there’s no DoS protection. Since we’re not able to validate until the end, and since we can’t identify the culprit if it does fail to verify, it’s really easy to DoS (intentionally and unintentionally), and there’s no obvious path forward if validation does fail.

I’ll see if I can work some DoS protection into this scheme, but I’m not actually confident that it’s a problem that can be solved.

3 Likes

So I’m not sure how to do this the proper way. :slight_smile: It’s not clear to me how Virtue poker does it. Iddo Bentov (their advisor) has published many mental poker papers; they make some reference to commutative encryption, but I’m not sure exactly what they do or how, and whether it’s even secure.

The idea has many weaknesses as is right now:

  • DoS (flooding of invalid transactions)
  • Honest or dishonest disconnects / protocol abortions (which we at least can identify)
  • Tx validation
  • …and the protocol itself leaks info, as Player 2 knows the structure of Player 1’s transaction (how many inputs, how many outputs, etc.) by counting the cards, even though I’m sure this can be mitigated/designed away.

Whilst it is tempting to try to fix these problems straight away, I’d be curious to understand whether we can even get this dumbed-down version to work, as I’m not fully convinced we can. If it works, we could then try to tackle the other problems; if we can’t get it to work naively, the other problems won’t matter.

Re the validation of transactions, I wonder if there’s some zk construction that would allow us to prove that:

  1. a transaction’s equation is valid;
  2. that the inputs are in the UTXO set;
  3. that these particular commitments are the ones that are in the “deck of cards” that PlayerX encrypted and passed on;

without revealing anything about the actual commitments or kernels themselves. But again, don’t know enough about zero knowledge protocols. :slight_smile:

4 Likes

Along the lines of what @shush was suggesting, I guess we would get 1) for free if we just blinded all sides by multiplying with a large enough number (our private key).
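For example (just a sketch, assuming transactions validate as a sum-to-zero check over Pedersen commitments, with $G$ the generator, $C$ the input/output commitments, and $X_k$ the kernel excesses): if

$$\sum_i C^{\text{out}}_i - \sum_j C^{\text{in}}_j = \text{offset}\cdot G + \sum_k X_k,$$

then scalar-multiplying every term by the same secret $k$ preserves the equality,

$$k\Big(\sum_i C^{\text{out}}_i - \sum_j C^{\text{in}}_j\Big) = (k\cdot\text{offset})\, G + \sum_k k\, X_k,$$

so the blinded balance equation could still be checked, though rangeproofs and kernel signatures are a separate question.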

3 Likes

So if we’re willing to look past DoS protection, I think I can dumb this down quite a bit (at least, in my mind it’s simpler to reason about) by removing all of these locks altogether; a toy sketch follows the steps below.

  1. Everyone connects to the tumbler, and uploads their kernel.
  2. The tumbler gives each of them a next person to talk to, and identifies one of them as the first node in a ring.
  3. The first node generates a random blinding factor R, adds it to their own transaction offset, and passes the result to the next node.
  4. Each node adds their transaction offset, and passes it to the next node until the last node, which adds their transaction offset, and passes it back to the first node.
  5. The first node subtracts the blinding factor R, and now has the sum of all transaction offsets, which is given to the tumbler.
  6. Every node opens a new circuit to the tumbler for each input and each output in their transaction (using random timers to prevent linking) and then uploads their inputs/outputs to the CJ server.
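A toy Python sketch of steps 3–5, assuming offsets are scalars mod the secp256k1 group order (all names illustrative):

```python
import secrets

# secp256k1 group order; transaction offsets are scalars mod N.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Each node's secret transaction offset (known only to that node).
offsets = [secrets.randbelow(N) for _ in range(5)]

# Step 3: the first node masks its own offset with a random blinding factor R.
R = secrets.randbelow(N)
running = (R + offsets[0]) % N

# Step 4: each remaining node adds its own offset and passes the sum along.
# No node ever sees another node's individual offset, only a running sum
# still masked by R.
for offset in offsets[1:]:
    running = (running + offset) % N

# Step 5: the first node removes R, leaving the sum of all offsets, which is
# all the tumbler ever learns.
total_offset = (running - R) % N

assert total_offset == sum(offsets) % N
```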

I’m fairly certain this design provides all the same security/privacy guarantees as your protocol, while removing the need to use locks, and eliminates any leaking about tx structure. It also provides a little bit of resilience against unintentional disconnects, because if the next node in the ring isn’t reachable, their kernel can be thrown away without having to start over.

I didn’t base this off of any existing protocol, just a random thought I got a few minutes ago, so it very well could have a blatantly obvious fatal flaw :slight_smile:

4 Likes

Nice! :smile:

So the logic is that only Node 1 needs to generate R, since once Node 2 adds their own offset, it’s still not possible to figure out Node 2’s offset unless you’re Node 1, correct?

This feels a lot simpler indeed. A big plus that you can handle disconnects better! Shame that there are a lot of circuit openings required in step 6 for all parties. Wonder if there’s a way to reduce all that somehow.

4 Likes

Yep, that’s correct, though the caveat is that if an attacker submitted multiple txs and ended up just before and after you in the ring, they could still learn your offset. I’m sure there’s a way of solving that too, but it’s something that could be tackled later.

Right, would be nice. If your only goal is to break the link from inputs to outputs, you could submit all inputs in one connection, and all outputs in another. But it starts to get risky if you’re submitting a bunch of inputs together.

4 Likes

I did some digging, and it turns out that this is Wasabi’s only goal. They actually submit all inputs, the transparent change output, and Chaumian blinded outputs in one circuit, and then unblind and sign in another.

So even if we don’t create a separate circuit for each input/output, the approach I described still improves on Wasabi’s privacy, as long as we also couple it with CoinControl and/or a coin selection algorithm that limits the number of inputs (for example, by using largest inputs first).
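For instance, a minimal “largest inputs first” selection sketch (purely illustrative; amounts as integers, fees ignored):

```python
def select_inputs(utxos: list[tuple[str, int]], amount: int) -> list[tuple[str, int]]:
    """Pick the fewest inputs to cover `amount` by taking the largest first.

    `utxos` is a list of (output_id, value) pairs.
    """
    selected, total = [], 0
    for utxo in sorted(utxos, key=lambda u: u[1], reverse=True):
        if total >= amount:
            break
        selected.append(utxo)
        total += utxo[1]
    if total < amount:
        raise ValueError("insufficient funds")
    return selected
```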

4 Likes

I had a brief chat with nickm (Tor developer), and he said opening 250 circuits (i.e. 50 users × 5 circuits each) within a few minutes is not a huge deal. He still advised limiting the number of circuits wherever possible, and warned us of the importance of closing the circuits when we’re done with them. As long as we follow that advice, it sounds like we should be good to go.

10 Likes

Fantastic work, can’t wait to use it. Could it be federated, so you can randomly choose from trusted people’s hosted CoinJoins?

2 Likes