Daily Aggregator (Partial Transactions)

This is an idea I have had bouncing around for a little while. It takes advantage of our concept of “partial transactions” (discussed previously in the context of decoys and incremental aggregation during relay) and the “daily aggregator” currently being discussed.

This is far from a fully fleshed-out idea and is still very rough, but I wanted to see what people think of it.

tl;dr We can non-interactively aggregate partial transactions, breaking linkability “at rest”. The partial transactions can collectively pay the fees for the final aggregated transaction. And we can potentially incentivize the aggregator for doing this work.

Full Transactions

A “full transaction” consists of a set of input(s), a set of output(s), a kernel and an offset. For simplicity let’s assume transactions only contain a single kernel.

A -> [B, C], K, offset
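
For concreteness, a full transaction could be modelled roughly as follows (a minimal Python sketch; the field names and types are illustrative and not Grin’s actual data structures):

from dataclasses import dataclass
from typing import List

@dataclass
class Kernel:
    excess: bytes      # public excess (a commitment to zero)
    signature: bytes   # signature proving knowledge of the excess
    fee: int

@dataclass
class FullTransaction:
    inputs: List[bytes]    # commitments being spent, e.g. [A]
    outputs: List[bytes]   # commitments being created, e.g. [B, C]
    kernel: Kernel         # a single kernel, as assumed throughout this post
    offset: bytes          # scalar split off from the kernel excess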

Partial Transactions

We have previously discussed “self-spend” partial transactions (no kernel) as a way of adding decoy outputs or otherwise obfuscating existing transactions. It has been proposed that these could be aggregated or combined with an existing transaction given sufficient fee on the existing transaction.

A -> A’, excess

Alice, for example, could construct a “self-spend” partial transaction, moving funds from A to A’, by revealing the excess between the two commitments.

We believe partial transactions are safe to use in a “self-spend” scenario where Alice owns both A and A’. A partial transaction can be reversed given knowledge of the excess and is therefore unsafe for anything other than a “self-spend”.

In this scenario the original transaction sets the fee. Aggregation is only possible if the original tx had sufficient fee to cover the additional inputs and outputs from the partial transaction.

Partial Transaction Aggregation

Two partial transactions can be trivially aggregated -

A -> A’, excessA
B -> B’, excessB
[A, B] -> [A’, B’], (excessA + excessB)
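
As a toy sketch of the aggregation rule (Python; the group order below is a stand-in, not the actual curve order, and commitments are just labels here):

Q = 2**127 - 1   # stand-in group order, not the actual curve order

def aggregate_partials(p1, p2):
    # Combine two kernel-less partial transactions: concatenate the input
    # and output sets and simply add the excesses.
    return {
        "inputs": p1["inputs"] + p2["inputs"],
        "outputs": p1["outputs"] + p2["outputs"],
        "excess": (p1["excess"] + p2["excess"]) % Q,
    }

# A -> A', excessA   and   B -> B', excessB
pA = {"inputs": ["A"], "outputs": ["A'"], "excess": 1111}
pB = {"inputs": ["B"], "outputs": ["B'"], "excess": 2222}
print(aggregate_partials(pA, pB))
# {'inputs': ['A', 'B'], 'outputs': ["A'", "B'"], 'excess': 3333}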

Partial Transaction Fees

In the example above A and A’ have equal value. We are simply moving funds from one output to another.

But we could reduce A' by a small amount and introduce an imbalance. Let’s call this imbalance the partial transaction “fee”.

A -> A’, fee, excess

The fee is the difference between the two committed values, and the excess is the difference between the two blinding factors.

(r1*G + v1*H) -> (r2*G + v2*H), (v1 - v2), (r1 - r2)
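
A toy check of that relation (Python, treating the generators G and H and the group order as small stand-in integers purely to exercise the arithmetic; a real implementation would use Pedersen commitments over secp256k1):

Q = 2**127 - 1   # stand-in group order
G, H = 3, 5      # stand-in generators for blinding factor and value

def commit(value, blind):
    return (blind * G + value * H) % Q

r1, v1 = 987654321, 100_000   # input  A  = r1*G + v1*H
r2, v2 = 123456789,  99_000   # output A' = r2*G + v2*H

fee = v1 - v2      # difference of the committed values
excess = r1 - r2   # difference of the blinding factors

# A - A' should equal excess*G + fee*H
assert (commit(v1, r1) - commit(v2, r2)) % Q == (excess * G + fee * H) % Q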

Note that the introduction of fees affects the reversibility of these partial transactions. It is no longer “free” to reverse a partial transaction and the fee must be “refunded”.

Partial Transaction Aggregation (2)

If each partial transaction includes a small “fee” then several can be aggregated together such that the sum of the fees is now sufficient to cover the cost of an additional kernel. We can now produce a full transaction from several partial transactions with the necessary kernel excess and offset.

A -> A’, feeA, excessA
B -> B’, feeB, excessB
[A, B] -> [A’, B’], KA+B, offset

Note that this aggregation is still non-interactive, similar to aggregation of full transactions. Alice, or Bob, or Charlie can take multiple partial transactions and construct the necessary kernel to produce a full transaction. The difference here is we only need a single kernel in the resulting aggregated transaction.
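
Sketching the aggregation step in the same toy style (Python; sign conventions follow the notation above, the kernel signature is omitted, and the offset is just a random scalar):

import secrets

Q = 2**127 - 1   # stand-in group order, not the actual curve order

def aggregate_to_full_tx(partials):
    # Each partial is {"input": A, "output": A', "fee": feeA, "excess": excessA},
    # using the sign convention above (excess = input blind - output blind).
    total_fee = sum(p["fee"] for p in partials)
    total_excess = sum(p["excess"] for p in partials) % Q

    # Split the summed excess into a public kernel excess and an offset;
    # only one kernel is needed for the whole aggregate.
    offset = secrets.randbelow(Q)
    kernel_excess = (total_excess - offset) % Q   # signed over in a real kernel

    return {
        "inputs": [p["input"] for p in partials],
        "outputs": [p["output"] for p in partials],
        "fee": total_fee,
        "kernel_excess": kernel_excess,
        "offset": offset,
    }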

Daily Aggregator

Aggregating full transactions on a delayed basis (daily, hourly, whatever) has the undesirable property of necessarily trading time for aggregation. Rather than a transaction confirming in ~60s, we choose to wait 24 hours in an attempt to obfuscate linkability. This is maybe reasonable for some transactions but not others.

An alternative approach may be to aggregate “self-spend” partial transactions. We don’t attempt to break linkability within transactions, but between them.

Say we have two full transactions where one is dependent on the other. The concern is linkability across the shared output.

A -> B
B -> C
A -> B -> C

Aggregation of full transactions aims to obfuscate the linkability by increasing the number of possible links between inputs and outputs.

A -> B1
[B1, B2, …] -> [C1, C2, …]
A -> B1 -> C?

We can achieve similar obfuscation of linkability, not by aggregating full transactions, but instead by introducing aggregation between transactions.

A -> B
[B1, B2, …] -> [B’1, B’2, …]
B’ -> C
A -> B -?- B’ -> C

Here we are not attempting to obfuscate linkability of outputs “in transit” (within transactions) but instead “at rest” (between transactions). Introducing a 24 hour delay when funds are “at rest” may be a more acceptable user experience than delaying transactions themselves. A user could decide to aggregate overnight, for example. A user could receive a transaction quickly and then self-spend to obfuscate linkability with less concern over the cost of the delay.

Aggregation Incentives

This needs more thinking through, but given that we can potentially include on each partial transaction a small fee sufficient to pay for an aggregation kernel, it would also be possible to include enough fee to cover an additional output, giving us a mechanism to reward the aggregator for their services. The fees from the partial transactions could pay for both the transaction fee and an additional aggregation fee claimed by the aggregator. Incentivizing aggregation may turn out to be a bad idea, but it is worth thinking through.
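
To make the incentive arithmetic concrete, here is a rough back-of-the-envelope sketch (Python; the relative weights of 1 per input, 21 per output and 3 per kernel are an assumption about Grin’s fee weighting, and the extra reward output sizing is hypothetical):

def surplus_for_aggregator(n, extra_weight, fee_base=1):
    # each partial pays for its own input + output plus a small extra contribution
    collected = n * (1 + 21 + extra_weight) * fee_base
    # the aggregate must fund n inputs, n outputs, one kernel,
    # and one additional output claimed by the aggregator as its reward
    required = (n * (1 + 21) + 3 + 21) * fee_base
    return collected - required

# with extra_weight = 1 the aggregator breaks even at 24 partials
print(surplus_for_aggregator(24, 1))    # 0
print(surplus_for_aggregator(100, 1))   # 76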

10 Likes

Assuming I understand it right, this seems definitely worth researching further. So the key idea is that you make a transaction and only later make the outputs unlinkable. This comes at the cost of paying more fees, but with the great benefit of not needing to wait for confirmation times to secure your funds. I think the wallet could do two things simultaneously (assuming the user doesn’t need to spend O1 soon):

  1. create your output O1 that will be used in the full transaction
  2. create the partial transaction O1 -> O2, fee, excess that is sent to the aggregator

The aggregator accepts the partial transaction O1 -> O2, fee, excess and waits for at most a couple of minutes for the transaction that creates O1 to land in the mempool - otherwise it drops this partial transaction. In theory, the party that gets these partial transactions could steal some of the fees by creating outputs, but since this is a trusted service in the case of the daily aggregator, that’s probably not really an issue.
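
Roughly, the aggregator’s acceptance logic could look like this (a Python sketch; mempool_creates_output and the 2-minute timeout are hypothetical placeholders):

import time

PENDING_TIMEOUT = 120   # wait at most a couple of minutes for O1, as above

def mempool_creates_output(output):
    # stub: a real aggregator would query its node's mempool here
    return False

def accept_partial(partial, pool):
    # partial = {"input": O1, "output": O2, "fee": ..., "excess": ...}
    deadline = time.time() + PENDING_TIMEOUT
    while time.time() < deadline:
        if mempool_creates_output(partial["input"]):
            pool.append(partial)   # keep it for the next aggregation round
            return True
        time.sleep(5)
    return False                   # O1 never showed up: drop the partial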

So the output that the full transaction creates acts as a temporary intermediary output.

Edit: What’s also good is that this comes at the cost of 1 kernel per aggregator broadcast, which in the case of the daily aggregator is 1 additional kernel per day.

1 Like

Yes. This would actually be working as designed - as long as sufficient fees exist to actually build the tx then all surplus fees can be claimed by the aggregator. In fact they would need to be for the transaction to balance. If the tx itself overpaid and used the surplus fees then others would simply take advantage of this spare capacity and pad the tx out with additional inputs/outputs etc.

Yes absolutely. This would be one way of implementing this in a relatively automated way.
And as long as full transactions take priority, it is trivial to “cancel” the partial transaction if it is decided to use the output before periodic aggregation takes place. You simply double spend O1 before aggregation occurs (with an understanding of what that entails privacy-wise).

This is a key point of this approach - it’s effectively free (in aggregate) in terms of on-chain storage.

1 Like

Good point. From the UX perspective, you could have an “unlink after receive” checkbox per transaction, or even have this encoded in the address itself, hinting that you want to unlink these outputs after they have been received. The wallet would just show these outputs as “unlinking… ~4 hours 32 min left”, while also giving the user the option to cancel the unlinking and spend these outputs.

1 Like

It would also be possible to reverse the order of operations here - unlink then transact - if parties were willing to wait for the unlinking.

This would be an extreme version of “late locking” - where parties had no visibility into outputs being spent in the eventual transaction.

These are just different ways of looking at what is effectively the same sequence of events -

(tx -> unlink) -> tx
tx -> (unlink -> tx)

This is always possible for expediency -

tx -> tx

And this is also possible for the paranoid -

unlink -> unlink -> unlink
1 Like

I really like the idea. I think it’s better than the normal daily aggregator because it doesn’t slow down the tx, but to get the full benefit from it, it would need to be on by default imo. A few questions:

  1. how would that influence the fees? My view is that since half of the outputs would be from the aggregator (if on by default), “real tx” throughput would be reduced by 50%, which would probably increase fees when blocks are full. Is that a problem?
  2. I guess that’s a centralized aggregator; do we know how to prevent DDoS or similar attacks?
1 Like

Bitcoin has two major kinds of graph obfuscation techniques: coinjoin and coinswap. The former aggregates regular transactions, while the latter swaps around ownership of coins.
The proposed aggregation services for Grin similarly fall into two camps.
The original proposal was for a coinjoin service, while this thread proposes a coinswap service.
Each has their pros and cons. Coinjoins require no extra bandwidth/fees but require patience and a backup publishing mechanism if the service fails.
Coinswaps can be run at leisure right after or in between regular transactions but require extra bandwidth and fees. While a weekly coinjoin service makes little sense, this would make a reasonable option for a coinswap service.
Having minimal impact on the number of kernels is very nice. A final advantage is that mathematically modeling a coinswap service and its requirements is much easier, which may boost its research and development.
On the whole then, I tend to prefer the idea of a coinswap service, and to focus our attention on those first. Once coinswap services are successfully deployed, we can come back to revisit additional coinjoin services.

[1] CoinJoin - Bitcoin Wiki
[2] CoinSwap - Bitcoin Wiki

6 Likes

One way to prevent a coinswap server from learning individual input-output links is to perform the service as a Multi Party Computation (MPC).

The CoinSwap service shall be provided by n >= 2 servers CS_1…CS_n, each with a known TOR address, that are assumed to not all be colluding.

Suppose I want to self-spend my output C to C’ = C + x*G - fee*H.
The fee can be assumed to be fixed, e.g. to (1+21+r) * accept_fee_base, where r accounts for the swap service reward.
I will send one of the servers the following info

  • an offset x_0
  • C’
  • a BP range proof for C’

and the remaining n-1 servers will each receive

  • a commitment to x_0
  • a random offset x_i

where x_0 is chosen as x - Sum x_i.
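
For illustration, the client-side share construction might look like this (Python sketch; a plain hash stands in for the commitment to x_0 and the group order is a placeholder):

import hashlib, secrets

Q = 2**127 - 1   # stand-in group order

def build_shares(x, n):
    # Split the total offset x across n servers: pick random x_1..x_{n-1}
    # and set x_0 = x - Sum x_i, so that all shares sum back to x.
    xs = [secrets.randbelow(Q) for _ in range(n - 1)]
    x0 = (x - sum(xs)) % Q
    x0_commitment = hashlib.sha256(x0.to_bytes(16, "big")).hexdigest()

    first_msg = {"x0": x0}   # sent together with C' and its BP range proof
    other_msgs = [{"commit_x0": x0_commitment, "xi": xi} for xi in xs]
    return first_msg, other_msgs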

At the end of each day, an MPC takes all these self-spend shares submitted throughout the day, and recombines them based on the x_0 commitment of each. Then the MPC computes C from C’ and the total offset, verifies each one (balance and input must be in UTXO while output is not), and aggregates all valid ones together into one big transaction, with an additional kernel and swap reward output.

Even n-1 colluding servers should be unable to link C and C’ together from this MPC output.

A DoS attack could submit many invalid tx shares, prompting a necessary MPC to filter them out. This MPC could turn out to be too expensive to deal effectively with the attack. That is the biggest weakness of this scheme.
It would require the MPC overhead to be relatively modest.

PS: Note that we can simply set n=1 to get a non-MPC scheme with minimal messages.

PPS: We can also start out with an n>1 service that starts out as non-MPC (e.g. each day the n servers pick a random leader to do the aggregation) but that aims to migrate to MPC in future with no change in interface.

1 Like

I think this is a nice idea and it works. There’s one problem though, which is that it has a DoS vector: a malicious actor can send invalid data and the services are not able to discard it until the end of the day. This means that the attacker could send lots of commitment_to_x_0, random_x_i pairs to the nodes, causing them to use more memory and eventually get their process killed because of the memory usage.

This comes with a very similar guarantee to the scheme I described here (2nd image), where you can have N “obfuscators” in sequence: it works if at least one service is honest. Mine is actually a bit worse in guarantees and uses a bit more bandwidth + verification time. In your scheme, it is impossible for a single evil service to just broadcast the transaction, but that comes at the cost of making it impossible to validate the “coinswap transactions” before the end of the day, which allows for the DoS I mentioned. My scheme provides transaction validation at each step, but a rogue service could broadcast early - though this could be easily identified from the wallets, because while you would get the swap, you wouldn’t get the output you expected.

I think I just found a way to improve my scheme by guaranteeing that either all aggregators happen or just the first one - will update the other topic.

I wonder if it would be possible to have some form of blinded validation in your version that happens more frequently, to avoid this DoS attack? Like offsetting the computation by some x*G in some way.

Here are the properties of this idea as I see them.

We end up with a centralized service where:

  1. if it gets totally compromised => you end up with the same privacy as you have right now
  2. if it gets ddosed/shutdown => you end up with the same privacy as you have right now
  3. if it’s working honestly (1/N honest services) => it increases the anonymity set of the outputs significantly (to the point where I’d say we have better practical anonymity than Monero) assuming ~20 txs per hour

The downsides I see are:

  1. in case of a silent takeover, you’d think you have better privacy than you actually did
  2. you pay ~2x fees (which is cheap right now and IMO worth it for increasing everyone’s privacy). If it turns out to be too much, people can opt out

The great thing is that the service itself has no impact on the actual transactions between the parties, so regardless of whether it works or not, transacting isn’t impacted in any way as far as I can tell. It doesn’t even touch transactions and it only helps with privacy, so the only thing that could get hurt is privacy, which in the worst case stays as it is now. Note that since the transactions are detached from the 1-1 coinswaps in this scheme, they could both be building confirmations in parallel. If the wallets defaulted to sending to the hourly aggregator, the average aggregation would take ~30 minutes and 1 hour in the worst case. In the worst case, we’d wait 1 hour for our privacy, which is what Bitcoiners wait for their confirmations.

P.S. Having a completely trustless service would be very nice. I am however not sure if it’s even theoretically possible to have one that is based on non-interactive aggregation and can handle chain reorgs - which imo kills all hopes of having it by default due to the highly increased damage :frowning: But I guess this is a question for professional researchers and hopefully someone proves me wrong.

1 Like

When a newly created output lands on chain, how hurtful is it if the aggregator knows that it’s one of its inputs? I feel like it shouldn’t matter, even if it’s evil? Unless you would send the utxo to the aggregator for self-spend at some random time.

You only end up with the same privacy if the compromise leads to all submitted transactions being made publicly available (as they are otherwise in mempools).

Very good point. It would still be private in the eyes of everyone but the attacker.

I think @vegycslol makes a good point. If you derive an input C from an output C', then the service that receives C' can check which outputs landed in the mempool or the chain at the time it receives C' and connect the input/output due to this timing attack. To avoid this timing attack, we should derive the output from an input. I’m not sure how to do this though.

Deriving the output is probably better also for other reasons e.g. simpler validation of inputs prior to aggregation.

1 Like

Good point.
That means the wallet should wait some random long time before submitting C’.

You can’t derive the range proof from the output.

Instead of sending commitments to x0, we send xi+1 which serves as a link between two adjacent services for this specific coinswap.

This means that every service receives xi and xi+1. Suppose that the last service Sn is the one that receives the outputs C' with a valid rangeproof for C' and xn (since it is the last one, xn+1 doesn’t exist).
Service Sn:

  1. starts with K = 0*G - this will be the kernel curve point
  2. computes C = C’ - xn*G which is interpreted as an input
  3. computes K = K + xn*G - we will be adding the kernel curve points (no signatures)
  4. sends C, K, xn to Sn-1

Service Sn-1 then interprets the C it received as an output - let’s call it C'. It then finds xn-1 from xn because, as we said, xi comes with a link to xi+1.
Then it repeats these computations:

  1. computes C = C’ - xn-1*G which is interpreted as an input
  2. computes K = K + xn-1*G - we will be adding the kernel curve points (no signatures)
  3. sends C, K, xn-1 to Sn-2

and this continues all the way to S1. After S1 has finished this process, if the coinswap was valid, then C should be an unspent output in the UTXO set (i.e. a valid input). This is because (Σxi)*G = kernel excess and everyone contributed their part. However, S1 can’t publish the transaction because it does not have the total private excess. It has only a kernel curve point, which isn’t a valid transaction. It can then tell the others whether this coinswap is valid or not, by communicating back through these xi values, which form a chain. If it was valid, S1 adds I, O, x2 to the valid_coinswaps_pool.
When S1 decides to construct the aggregated transaction, it takes all the inputs and outputs from the valid_coinswaps_pool and sums all the x1 values to obtain partial_private_excess - let’s call this transaction T. It then sends T and a corresponding x2 value for each output in T, so that S2 knows how to produce C' from C (when we aggregate we produce the output from the input). S2 then computes C' for each C and adds the x2 values to the partial_private_excess. It then sends this new T and an x3 for every output. This continues until Sn is done.

After Sn finished doing C -> C' transformation, it should have a valid transaction with valid inputs, outputs, rangeproofs and a private excess which it can use to construct the kernel and broadcast.

This scheme has an ordering where S1 is the one checking whether inputs exist in the UTXO set and Sn is the one that gets all the final outputs, but the connections between them when computing an input are only linked via adjacent random xi values.
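
To illustrate, the per-service peel-back step amounts to something like this (toy Python; integers mod a prime stand in for curve points, and all inter-service messaging is elided):

Q = 2**127 - 1   # stand-in group order
G = 3            # stand-in generator

def peel(c_prime, k, x_i):
    # One service's step: treat c_prime as its output C', derive the
    # commitment one hop earlier and accumulate the kernel curve point.
    c = (c_prime - x_i * G) % Q   # C  = C' - x_i*G
    k = (k + x_i * G) % Q         # K += x_i*G (no signatures yet)
    return c, k

# Sn starts from the submitted output C' with K = 0 and works backwards:
c, k = peel(c_prime=123456789, k=0, x_i=42)   # at Sn
c, k = peel(c_prime=c, k=k, x_i=7)            # at Sn-1, and so on down to S1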

I’m not saying this works and has no issues, but perhaps if we tweak this a bit we can achieve similar functionality without a complex MPC.

2 Likes

Great idea to avoid a super expensive MPC by doing some communication rounds back and forth between the CoinSwap servers!

Let me combine some of our ideas for the following construction:

A client with self-spend C → C_0, where C_0=C+(Sum x_i) * G - fee*H sends CS_0 the following info

  • x_0
  • C_0
  • a BP range proof for C_0

and for i=1…n sends to each CS_i:

  • Hash(x_{i-1})
  • x_i

Then at the end of the day:

a) For i=0…n-1, CS_i sends to CS_{i+1}

  • the set (over all client submissions) of <Hash(x_i), C_{i+1}> where C_{i+1} = C_0 - Sum_{j<=i} x_j * G

b)

  • CS_n computes a maximum subset S of unique valid C_n + fee * H (that are all in UTXO)
  • CS_n sends S to CS_0
  • For i=n…1, CS_i sends the set of valid Hash(x_{i-1}) and Sum_valid Sum_{j>=i} x_j to CS_{i-1}

c)

Now CS_0 knows both the set S of valid inputs and the set of valid x_0 and thus the valid BP, as well as the total offset Sum_valid Sum_i x_i and can construct the aggregate CoinSwap tx.
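
For concreteness, the forward pass in step (a) amounts to the following (toy Python; real code would use curve points and carry the Hash(x_i) tags alongside each element):

Q = 2**127 - 1   # stand-in group order
G = 3            # stand-in generator

def forward_pass(c0, offsets):
    # offsets = [x_0, ..., x_{n-1}]; returns [C_1, ..., C_n]
    # where C_{i+1} = C_0 - (x_0 + ... + x_i) * G
    chain, acc = [], 0
    for x in offsets:
        acc = (acc + x) % Q
        chain.append((c0 - acc * G) % Q)
    return chain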

If any of the CS_i has its memory buffer filled up by a spam attack, then it can trigger a filtering computation among all CS that is similar to the above but just removes all invalid data.

Instead of a client sending to each CS_i individually, it’s probably better to send a data bundle to just CS_0, which forwards the encrypted data packets contained in the bundle to their respective CS_i. That also ensures that CS_0 alone needs to trigger the spam filters.

3 Likes

Exactly what I was thinking: there is no need for the users to even know how to send data to each server; it is enough to know their public keys so they can encrypt the data for them. That way attackers can only try to DDoS CS0. I would suggest sharing data this way:

The client sends CSData0 = encrypt({ x0, C0, BPC0, CSData1 }, P0) to CS0, where:

  • BPC0 is bulletproof for C0,
  • CSData1 is what he should send to the next hop (CS1),
  • Px is public key of CSx.

The data for CSi with i > 0 is given by a recursive formula:
CSDatai = encrypt({ Hash(xi-1), xi, CSDatai+1 }, Pi)
where the last element (CSDatan) has an empty CSData for the next element (since the next element doesn’t exist).
Why do we need recursive encryption?
If CS0 gets C → C0 and

[
  encrypted({ Hash(x0), x1 }, P1),
  encrypted({ Hash(x1), x2 }, P2),
  ...
  encrypted({ Hash(xn-1), xn }, Pn)
]

then CS0 could send the pair <C0, encrypted({ Hash(xn-1), xn }, Pn)> to CSn; CSn would decrypt its data and remember the pair <C0, xn>. At the end of the day, when the round actually goes through, CSn would then find the input for its xn and would have a link between the input and the output with only 2 evil participants. By recursively encrypting it, you would need all participants to be evil to link them, if I’m not mistaken. By spamming a CS you can still reduce the linkability if CS0 and CSn communicate, e.g. you have 3 legit C → C0, then someone spams CS0 so that it needs to start the “filtering round” before the end of the day. This means that after the filter only 3 inputs and 3 outputs will be left, and CSn can be aware of all of this data if CS0 gives it the info.
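
A sketch of building that nested bundle (Python; encrypt is a placeholder for some hybrid encryption to the hop’s public key, and the field layout is illustrative):

import json

def encrypt(payload, pubkey):
    # placeholder for real hybrid encryption (e.g. ECIES) to the hop's key
    return "enc[" + pubkey + "](" + json.dumps(payload) + ")"

def build_bundle(x, hashes, c0, bp_c0, pubkeys):
    # x[i] is the hop's offset, hashes[i] = Hash(x[i-1]), pubkeys[i] = Pi.
    # Build CSData_n first, then wrap it in CSData_{n-1}, ..., down to CSData_0,
    # so each hop can only decrypt its own layer.
    n = len(pubkeys) - 1
    inner = ""   # CSn has no next hop, so its CSData field is empty
    for i in range(n, 0, -1):
        inner = encrypt({"hash_prev": hashes[i], "x": x[i], "next": inner},
                        pubkeys[i])
    # the outermost layer, for CS0, carries x0, C0 and its range proof
    return encrypt({"x0": x[0], "C0": c0, "BP": bp_c0, "next": inner}, pubkeys[0])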

3 Likes

There is one more potential attack that could be performed by S1. It can trigger the spam filter with some garbage data it created itself and thus isolate (and link) a single input -> output pair. This could partly be battled by having Si (including Sn) not broadcast the correct excess if the anonymity set is too small. S1 could use real inputs, but then it would need to pay the fees - or maybe not, if it removed them from the aggregated tx in the end.

It would be possible to prevent spam attacks by requiring a client to send a valid “punishment tx” which would contain a spam-prevention fee in a n-n multisig of CS participants with a locktime after which the client could reclaim it. CS would take the fee if the input was invalid or if the same inputs were sent twice (although we would need to be careful of not sending it twice from 2 devices in this case). It might not be the nicest solution though