Fan-out graph obfuscation

I’ve been thinking about graph obfuscation a bit. Here’s a simple fan-out scheme that obfuscates the transaction graph going forward and which, unlike other models, doesn’t require other parties to contribute inputs/outputs.

The key idea is that you don’t need others to obfuscate your transaction graph. Instead, we construct N outputs, only one of which is our real output. So unlike Monero, which assembles N “inputs” and proves that one of them is being spent, we construct N outputs and claim that one of them will later be spent with some real value.

I may have missed something, but IMO it’s worth thinking in this direction as well because it’s a completely trustless model for graph obfuscation.

The link above gives an example with 50 outputs, but we don’t need this many; that was merely to present the idea, and 10 is likely good enough.

6 Likes

To me, this forward noise looks a lot like a community-built dark forest in which others can rest safely.

Interesting thought direction.
I do not fully get:

  1. how the decoy outputs can be worthless/zero value
  2. who controls these decoys after sending them? Does the receiver control them all? I would think it would be more obfuscating if some decoys went to the sender, or if the receiver added decoys of his/her own to give to the sender (obfuscating the direction of the transaction).

You can make a commitment to 0 by creating an output of the form 0*H + r*G.
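For intuition, here is a minimal toy sketch of such a commitment, using plain integer arithmetic instead of real elliptic-curve points (the constants and group are made up for illustration; real implementations use curve points such as secp256k1):

```python
# Toy Pedersen commitments over integers mod a prime.
# Real implementations use elliptic-curve points; this only
# illustrates the algebra of C = v*H + r*G with v = 0 for decoys.
import secrets

P = 2**61 - 1          # toy group modulus (assumption, illustration only)
G = 5                  # toy "generator" for the blinding factor
H = 7                  # toy "generator" for the value

def commit(value, blind):
    """C = value*H + blind*G (mod P)."""
    return (value * H + blind * G) % P

# A real output committing to 100 units and a decoy committing to 0
# look the same to an observer who doesn't know the blinding factors.
real  = commit(100, secrets.randbelow(P))
decoy = commit(0,   secrets.randbelow(P))
print(real, decoy)
```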

The owner controls them. The idea is really simple: you create several tx graph continuations with the outputs that were created, and only one of those graphs really holds your change output, so no one can tell which graph is your real continuation.

1 Like

I think this idea is especially interesting when combined with other simple obfuscation techniques.
E.g. if the receiver also commits some real-value outputs, contributes to the transaction fee, and possibly adds some decoys of his or her own. That way I think you get the most obfuscation with the least amount of additional outputs/bloat.
-Do I understand correctly that the total number of decoys per user does not increase since you reuse them, so there is no blockchain bloat like with ring signatures?
-Are range proofs per output? If so, would 10 decoy outputs increase the blockchain size by approximately a factor of 10 (since range proofs take up most of the space)? See the rough size sketch below.
-Do the decoys have other effects on the scaling of the blockchain? E.g. will the chance be much lower that all decoy outputs of a transaction are spent, so that the transaction and its range proofs can be removed from the blockchain?
Anyhow, just some thoughts.
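For rough scale on the range-proof question: range proofs are per output, so each decoy carries its own. A back-of-envelope sketch, assuming Grin-like sizes (roughly a 33-byte commitment, a 675-byte Bulletproof per output and a ~106-byte kernel; the actual implementation may differ):

```python
# Rough size estimate for one payment with extra zero-value decoy outputs.
# All byte sizes are assumptions (Grin-like); actual numbers may differ.
COMMITMENT_BYTES = 33     # compressed Pedersen commitment
BULLETPROOF_BYTES = 675   # 64-bit range proof carried by every output
KERNEL_BYTES = 106        # kernel: excess + signature + fee/lock fields

def tx_size(n_inputs, n_real_outputs, n_decoys):
    outputs = n_real_outputs + n_decoys
    return (n_inputs * COMMITMENT_BYTES
            + outputs * (COMMITMENT_BYTES + BULLETPROOF_BYTES)
            + KERNEL_BYTES)

base   = tx_size(1, 2, 0)    # typical 1-input, 2-output (recipient + change) tx
fanout = tx_size(1, 2, 10)   # same tx with 10 decoy outputs on the change side
print(base, fanout, round(fanout / base, 1))  # roughly a 5-6x increase here
```

So compared against a whole 2-output transaction the hit is closer to 5-6x under these assumptions, though each decoy does add a full range proof, which is where the factor-of-N intuition comes from.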

For every output, you create N decoys, e.g. 10. Each of your 11 outputs that were created in a transaction continues its own transaction graph path, but only one of the graphs holds your change output. That’s really the main idea. It doesn’t matter much if others do the same or not; you make your own destiny. Pay more, get better privacy (assuming you run a node and txs overpay a bit so that inputs can be added). Now you have 1/11 obfuscation, which is the same obfuscation probability as Monero (though a bit different).
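Continuing the toy commitment sketch from above (still integers mod a prime, not real ECC; helper names and values are illustrative assumptions), the fan-out might be constructed like this:

```python
# Toy fan-out: spend a 100-unit input into a 60-unit payment, one real
# 40-unit change output, and N zero-value decoys. Same toy commitment
# scheme as in the earlier sketch; fees are ignored for simplicity.
import secrets

P, G, H = 2**61 - 1, 5, 7

def commit(value, blind):
    return (value * H + blind * G) % P

def fan_out(input_value, pay_value, n_decoys=10):
    change = input_value - pay_value
    outs = [(pay_value, secrets.randbelow(P)),   # recipient output
            (change, secrets.randbelow(P))]      # the one real change output
    outs += [(0, secrets.randbelow(P)) for _ in range(n_decoys)]  # decoys
    return [commit(v, r) for v, r in outs]

outputs = fan_out(100, 60, n_decoys=10)
# An observer sees 12 outputs; only the owner knows which of the 11
# change-side outputs (1 real + 10 decoys) continues the real graph,
# i.e. a 1-in-11 guess, as described above.
print(len(outputs))
```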

But if the other party joined in, you would remove information about the direction, right? Even better, if you swapped decoys you would start obfuscating more, I would think.
-Ah, I think I get what you mean. It does not matter if the other party joins in or not, since they are decoys anyhow.

You already solve the problem without others. Adding others in doesn’t “solve it more” :wink:

1 Like

This is almost like Monero, except the decoys are sent into the future rather than being pulled from the past.

1 Like

Yes, but it has two big problems:

  1. Bloat (even if temporary, it’s a big hit on throughput and bandwidth)
  2. Random people merging into your graphs makes your tx graph unreliable in case of a reorg

I find the unreliability that comes from that an extremely unwanted property. The only way I’d actually consider this viable would be if we had a consensus with well defined finality so that you could rely on things even when random people freely merge your graphs. Unfortunately, Nakamoto consensus only has probabilistic finality.

Here’s the description of what I’m talking about: Chain reorgs and graph merging | grinvestigation (may contain flaws).

1 Like

Ah, interesting. Reorgs have the potential to be quite catastrophic! In this hypothetical, I think it would be even worse for Grin with this approach, because inputs are only merged with other very recent transactions. Monero has the entire history to draw from for its ring signatures and would be less likely to be affected by a chain split/reorg. Still, though, I think they bias toward selecting recent transactions… it’s unclear to me.

What if Grin adopted a kind of ring signature scheme? Would cut-through still be possible? Imagine an output is spent and creates a ring signature with n possible inputs. From then on, all other outputs that happen to be in that group must also create a ring signature using the same inputs. Once the number of group ring signatures matches the number of inputs, the entire group could be pruned from the tree all at once.

Of course I have no idea if that works. And even if it did, there would have to be some enforced and random enough way to determine groups; otherwise a malicious actor could spend an output and create a ring signature group with outputs he wanted some kind of information on. Really just thinking out loud.

If that was possible, it would save the bloat of transmitting more transactions (although ring signatures would be larger) but pruning would be reduced since a group couldn’t be removed until all outputs in a group were spent.

I think that would improve reliability in the case of a reorg because you are drawing on transactions that are already confirmed, not potential future ones.

I’m currently reading through some Monero documentation, so that’s why I keep comparing these projects so much :laughing:. And I’m really just a layman when it comes to understanding this stuff.

P.S. Also, if you draw from past transactions instead of sending zero-value outputs and mixing them in with future transactions that happen to pass by, you don’t have to hope that they included a large enough fee to fit your extra piece.

I think Monero favors recently created outputs to simulate real transactions - analysis showed that the majority of outputs get spent rather quickly. This could be people moving to/from exchanges or something. This is why the “ripe” definition simulates real transactions; otherwise you could easily see different spending patterns for the decoys, which would allow you to recognize them.
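As a rough illustration of that recency bias: my understanding is that Monero samples a decoy’s age from a gamma distribution over log-time fitted to observed spend times. The sketch below is not the real wallet code, and the shape/rate constants are approximate assumptions.

```python
# Illustrative recency-biased decoy age sampling, in the spirit of what
# is described above. Not Monero's actual implementation; the constants
# are approximate assumptions.
import math
import random

GAMMA_SHAPE = 19.28          # approximate, assumption
GAMMA_RATE = 1.61            # approximate, assumption

def sample_decoy_age_seconds():
    # sample log(age in seconds) from a gamma distribution, then exponentiate
    x = random.gammavariate(GAMMA_SHAPE, 1.0 / GAMMA_RATE)
    return math.exp(x)       # typically lands within hours to days of "now"

ages = sorted(sample_decoy_age_seconds() for _ in range(10))
print([round(a / 3600, 1) for a in ages])  # decoy ages in hours
```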

Mimblewimble achieves cut-through with algebraic cancellation. I don’t think you can do this if you have ring signatures that need to be verified by new nodes, but maybe I’m wrong. For example, how would a new node joining the network verify the past ring signatures?
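To make the “algebraic cancellation” concrete, here is a tiny sketch of cut-through when one transaction’s output is spent by a later one. It is heavily simplified (commitments are treated as opaque identifiers; kernels, range proofs and the sum checks that make the real thing verifiable are ignored):

```python
# Toy cut-through: if an output created by one tx is spent as an input
# of a later tx, the two occurrences cancel and the intermediate output
# never needs to be stored or verified by a new node.

def cut_through(txs):
    """txs: list of (inputs, outputs) pairs, in chain order."""
    inputs, outputs = [], []
    for tx_in, tx_out in txs:
        inputs += tx_in
        outputs += tx_out
    spent = set(inputs) & set(outputs)           # intermediate outputs
    return ([i for i in inputs if i not in spent],
            [o for o in outputs if o not in spent])

# A pays B (creating output "B1"), then B pays C spending "B1":
tx1 = (["A1"], ["B1", "A_change"])
tx2 = (["B1"], ["C1", "B_change"])
print(cut_through([tx1, tx2]))
# -> (['A1'], ['A_change', 'C1', 'B_change']): "B1" has been cut through.
# If spending "B1" required a ring over other outputs, dropping it like
# this would no longer be straightforward.
```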

I’m not sure about that, since your transaction would still depend on other random subgraphs. If an input in your transaction was one of the invalidated ones, then your transaction would also become invalid. Similar thing for any ring using your outputs.

Always be learning :+1:

I could be wrong, but I think cut-through may still be possible. All new inputs generated from that group would still be referencing the same group. If you know all the inputs and outputs of a given group, couldn’t you just perform algebraic cancellation then? I’m not sure if this is correct, but I believe a ring signature has the same properties as a single signature. Monero does use Pedersen commitments too, after all. The difference with Monero is that every ring signature is unique, but they still use the ring signature to prove that a set of outputs and inputs equals zero. If a given group stays in that group, then the algebraic sum of inputs and outputs should equal zero once the total numbers of inputs and outputs match up.

1 Like

I see what you mean, but I’d actually need to read up on the ring sigs a bit to confirm. But I do think it suffers from the same reorg problem, right?

Yeah, you are probably right there, but that depends more on how old the other transactions are. That can be mitigated by choosing older transactions to mix with, but then it would be a trade-off between better reliability and better anonymity… I’ll try to look into how Monero handles this problem. Surely they have run into this issue?

At least looking at the wallet, it looks like it recommends 10 confirmations before you spend received funds. If the majority of users wait that amount of time, you might be able to hide together in a set of outputs that are at least a little more stable. It really depends on how quickly the average output gets spent after creation.

It’s not that hard to find this out, so I’d hope they are aware. Yeah, it depends on how many confirmations the ring decoys have. Awesome, let me know if you find something.

P.S. Consider joining the Grin team on Keybase if you have not yet.

1 Like