Has anyone tried an FPGA (field-programmable gate array) setup for mining Grin yet?
We could, but where to find a bitstream?
I'm fairly certain you could get an algorithm put together by Chinese coders for a fair price.
I’m going to borrow a riddle from auto racing circles to illustrate a point about mining Grin on FPGA.
Riddle: How do you make $1million racing cars?
Answer: Start with $10 million !!!
Specifically, mining Grin on FPGA is not a good idea. Lean mining doesn't fit well on an FPGA, while mean mining makes you memory bound and thus no better than a GPU:
- Memory bandwidth: the algo is memory bound, so you will be no faster than a GPU.
- The majority of FPGAs in the field will be slower than a GPU. The limiting factor on the popular mining FPGAs (Xilinx VU9P chips on 1525/U200 boards) is DDR4 memory, which is much slower than the GDDR5 and GDDR6 used on GPUs. For example, GDDR5 has 15x more bandwidth than DDR4 (i.e. 225 GB/sec vs 15 GB/sec, if I did the math correctly).
- There are a limited number of FPGAs using HBM2 memory, and those will be comparable to GPUs, as their memory bandwidth is equivalent to GDDR5/6.
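A quick sanity check of the bandwidth ratio quoted above, using only the figures from the post:

```python
# Verify the claimed GDDR5 vs DDR4 bandwidth ratio (figures from the post).
gddr5_gbps = 225   # GB/sec, GPU-class GDDR5
ddr4_gbps = 15     # GB/sec, the DDR4 figure quoted above

ratio = gddr5_gbps / ddr4_gbps
print(ratio)  # 15.0, matching the "15x" claim
```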
Generally FPGAs are priced several times higher than a GPU. You can get an RX 5700 for less than $500, while a slower DDR4 FPGA like the 1525 or U200 will set you back $3,500-$8,000. The HBM FPGAs carry similar price premiums, so your ROI will be several years on an FPGA.
While FPGAs consume less power, the savings do not offset the price premium: a 150W reduction equates to under $100 in savings per year.
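A back-of-the-envelope check of that "under $100 per year" figure. The $0.07/kWh electricity price is my assumption for illustration, not a number from the thread:

```python
# Annual savings from a 150W power reduction, running 24/7.
watts_saved = 150
hours_per_year = 24 * 365                         # 8760 hours
kwh_saved = watts_saved / 1000 * hours_per_year   # 1314 kWh
price_per_kwh = 0.07                              # USD/kWh, assumed

annual_savings = kwh_saved * price_per_kwh
print(f"~${annual_savings:.0f} saved per year")   # ~$92, i.e. under $100
```

At higher electricity prices the savings grow, but even at $0.15/kWh the 150W delta is only about $200/year against a multi-thousand-dollar price premium.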
Full disclosure - ePIC designs ASICs so we have a natural bias towards ASICs.
SQRL sells VU33Ps for circa $2,000: 8GB HBM2 @ 610 GB/s.
You could pick up XCVU33P boards for $1,500 at one point.
Thanks for adding the HBM2 board pricing. My comments about GPUs being better for Grin than FPGAs are still valid.
The VU33P provides similar memory performance to the RX 5700 but is priced 5x higher (i.e. $2,000 vs $400).
So you are better off buying five GPUs to mine Grin. The WhatToMine C31 calculator shows that five RX 5700s, despite being >6x more power hungry, earn 2.8x what a single VU33P earns.
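The capital-efficiency argument can be sketched with the prices and the WhatToMine ratio quoted above (the 2.8x earnings figure is the thread's number; I'm only restating it as arithmetic):

```python
# For equal capital, how does a GPU rig compare to one VU33P FPGA?
gpu_price = 400        # RX 5700, USD (corrected price from the thread)
fpga_price = 2000      # VU33P board, USD
earnings_ratio = 2.8   # 5 GPUs vs 1 VU33P, per the WhatToMine C31 numbers

gpus_for_same_budget = fpga_price // gpu_price
print(gpus_for_same_budget)  # 5 GPUs for the price of one FPGA
# Same outlay, 2.8x the revenue => the GPU rig's payback period
# is 2.8x shorter than the FPGA's.
print(earnings_ratio)
```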
EDIT - Corrected prices for RX5700 to $400.
Thank you for the structured reply. It was helpful and informative.
You’re welcome. The Grin community rocks; they listen to logic and discuss openly.
I was on a thread on ETH Magicians and presented similar logic. The response I got was that FPGAs exist because someone said they saw FPGAs in operation. [https://ethereum-magicians.org/t/progpow-audit-delay-issue/3309/103]
Isn't there a new PCIe 5 technology being developed with a theoretical throughput of like 64-128 GB/s?
With those new interfaces, would Grin become more effective to mine, or is it only the memory size on the cards that matters?
PCIe Gen5 won't even be close in performance. PCIe access to memory experiences high latency and is therefore very slow for random-access operations like edge trimming and bucket sorting. FPGA vendors are also hopelessly slow to support IP for high-speed buses.
By comparison, GDDR6 offers >640 GB/sec of low-latency memory. I'm guessing that random memory access across PCIe 5 will be slower than 12 GB/sec sustained, as system memory will be the other bottleneck.
As another data point, single-chip ASICs with embedded SRAM will have bandwidth in the range of 24-64 TB/sec. That's why our performance synthesis showed 25+ gps per ASIC on C31, compared to 1.8 gps on a 2080 Ti. On a PCIe card with 6 ASICs, ePIC would have been 150 gps at the same 300 watts as the 2080 Ti.
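Putting those figures on a common gps-per-watt basis (all numbers are from the post; since both devices run at 300W, the watts cancel and the efficiency gap equals the raw gps gap):

```python
# Graphs-per-second per watt, using the C31 figures from the post.
gpu_gps, gpu_watts = 1.8, 300          # 2080 Ti on C31
asic_card_gps, asic_card_watts = 150.0, 300  # 6-ASIC PCIe card

gpu_eff = gpu_gps / gpu_watts          # 0.006 gps/W
asic_eff = asic_card_gps / asic_card_watts   # 0.5 gps/W
print(round(asic_eff / gpu_eff, 1))    # ~83.3x more efficient
```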