Elaboration on differences between LEAN miner and MEAN miner?

Hi, could anyone elaborate more?
So far, my understanding is that they are based on two different methods to find the cycle, but how different are they?

The two endpoints of edge 'x' are e0 = hash(block_header, x, 0) and e1 = hash(block_header, x, 1). In the pruning (trimming) algorithm, what you need to figure out is whether there is another edge 'y' with the same endpoint (Cuckaroo) or almost the same endpoint (Cuckatoo).
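To make that concrete, here is a minimal sketch of how an edge's endpoints are derived. The `hash` below is a hypothetical stand-in for the siphash-based function the real miners use, and the key derivation from the block header is assumed:

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>

const uint32_t EDGEBITS = 31;                   // C31
const uint32_t NODEMASK = (1u << EDGEBITS) - 1; // endpoints live in [0, 2^31)

// Hypothetical stand-in for the keyed siphash of the real miners.
uint32_t hash(uint64_t header_key, uint32_t edge, uint32_t side) {
    uint64_t preimage = header_key ^ ((uint64_t)edge << 1 | side);
    return (uint32_t)std::hash<uint64_t>{}(preimage) & NODEMASK;
}

int main() {
    uint64_t key = 0x1234abcdULL; // would be derived from the block header
    uint32_t x = 42;              // edge index
    printf("e0 = %u, e1 = %u\n", hash(key, x, 0), hash(key, x, 1));
}
```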

The mean algorithm does this by sorting edges by endpoint. You need to store at least the edge index for each edge (31 × 2^31 bits for C31). You may even store the hash as well, at the cost of additional memory, so that you don't need to recompute it repeatedly.
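As an illustration, here is a sketch of one mean-style trimming round, reusing the toy hash() from above. It is simplified: real mean miners bucket edges by endpoint prefix so both the counting and the compaction stay sequential and cache-friendly, whereas this version uses one flat (and very large) counter array:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

const uint32_t NODEMASK = (1u << 31) - 1; // C31

// Hypothetical stand-in for the keyed siphash of the real miners.
uint32_t hash(uint64_t key, uint32_t edge, uint32_t side) {
    return (uint32_t)std::hash<uint64_t>{}(key ^ ((uint64_t)edge << 1 | side)) & NODEMASK;
}

// One mean-style round: drop every edge whose 'side' endpoint is unshared.
std::vector<uint32_t> trim_round(uint64_t key, const std::vector<uint32_t>& edges,
                                 uint32_t side) {
    std::vector<uint8_t> degree(NODEMASK + 1u, 0);   // per-node saturating count
    for (uint32_t x : edges) {
        uint32_t u = hash(key, x, side);
        if (degree[u] < 2) degree[u]++;
    }
    std::vector<uint32_t> kept; // surviving edge indices (what mean must store)
    for (uint32_t x : edges)
        if (degree[hash(key, x, side)] >= 2) kept.push_back(x);
    return kept;
}
```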

The lean algorithm reserves two bit arrays of size 2^31 for C31. It uses one array to keep track of live edges and the other to keep track of edge endpoints.
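A sketch of one lean-style round, again with the toy hash(). One simplification: to tell "seen once" from "seen at least twice" in a single pass, this version uses two endpoint bitmaps (effectively a two-bit saturating counter per node) rather than the exact layout of any particular lean miner:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

const uint32_t NODEMASK = (1u << 31) - 1; // C31

// Hypothetical stand-in for the keyed siphash of the real miners.
uint32_t hash(uint64_t key, uint32_t edge, uint32_t side) {
    return (uint32_t)std::hash<uint64_t>{}(key ^ ((uint64_t)edge << 1 | side)) & NODEMASK;
}

// One lean-style round over the bitmap of live edges ('alive').
void lean_trim_round(uint64_t key, std::vector<bool>& alive, uint32_t side) {
    std::vector<bool> seen(NODEMASK + 1u), twice(NODEMASK + 1u);
    for (uint32_t x = 0; x < alive.size(); x++) { // pass 1: count endpoints
        if (!alive[x]) continue;
        uint32_t u = hash(key, x, side);
        if (seen[u]) twice[u] = true; else seen[u] = true;
    }
    for (uint32_t x = 0; x < alive.size(); x++)   // pass 2: kill degree-1 edges
        if (alive[x] && !twice[hash(key, x, side)])
            alive[x] = false;
    // Note the single-bit reads/writes at random indices: cheap to wire up
    // in an ASIC, slow on CPU/GPU memory systems.
}
```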

In the mean algorithm you need much more memory, but you access it mostly sequentially. In lean you use less memory, but you read and write single bits at random locations. That access pattern is inefficient on CPUs and GPUs, but it can be made fast in custom hardware (ASICs).

So on a CPU/GPU you only do mean mining. You can do lean if you don't have enough RAM, but you will be wasting power compared to other miners with enough RAM. For ASICs, we expect vendors to implement lean efficiently.

Thanks, I have another question.
Can you get the same solution from these two miners? Or are their processes for generating edges and finding solutions totally unrelated?

If there is a solution in the graph, both approaches can be used to find it, and they will find the same thing.

In practice you make intentional errors, forgetting an edge if it doesn't fit into the fixed memory budget of your GPU target; in theory, however, both should produce the same solutions.

Simplest answer I know… LEAN uses less memory and MEAN uses more, giving faster results :slight_smile: Always want to have enough for MEAN :slight_smile: