Good morning, community! You may know me as the creator of the Grim GUI; I have also contributed to the Mimblewimble node and wallet repositories.
Time moves fast. Three years ago I joined Grin as an “investor”, then found it had no working node on mobile (the alternative implementation Grin++ crashed there and was later abandoned), so I decided to start work on Grim and launched the first Grin Rust node on Android and iOS. A lot has happened since, from hacker to troll attacks; this is normal competition in the cryptosphere.
Now we see that Grin has no maintainers at all. We have a list of open problems:
- sync issues;
- outdated libraries (e.g. the lmdb crate, which requires compiling grin for Android twice);
- experiments with new developments (mwixnet by @scilio and contracts by @oryhp);
- the p2p network, where everyone should be able to launch a peer on any device, even without a fixed IP address (e.g.: GitHub - cyphernet-labs/cyphernet.rs: Cyphernet is a set of libraries for privacy-preserving networking apps), possibly with Proof-of-Work protection (we just need to adopt an open-source miner, such as the conversion of @NicolasFlamel’s work to Rust: GitHub - DarkHorse0725/cuckatoo-mining: Convert cuckatoo mining into Rust);
- and some other open issues.
It would be my pleasure to work on Grin full-time. I want to offer myself as a full-time contributor with a salary of 300,000₽ per month (approx. $4,000). I can demonstrate my work with an online video stream, so everyone will see what we are doing. Thank you.
26 Likes
navie
November 13, 2025, 4:50pm
2
I am in favor. We need someone with Rust knowledge to review the atomic swap code, so a full-time commitment will go a long way.
4 Likes
No, no, no. If we invite Russian hackers into this position, it will be the final death of the Grin project.
chdsk
November 14, 2025, 8:32am
5
I am in favor. One of the few who participates in development.
4 Likes
I welcome this offer.
Others and I might be able to make an occasional pull request, but we need someone on “watch duty” to review and merge those requests to keep development and maintenance going.
4 Likes
trab
November 14, 2025, 6:24pm
7
Are there any “old timers” that would be willing to at least review pull requests before merge?
Could be worthwhile to even pay a small fee to old contributors for the review work.
5 Likes
The community has spoken.
As I stated at last night’s meeting, a qualified volunteer, Ardocrat, has stepped forward. The only things standing between Grin and progress are a grant and the technical privileges (commit, review, merge) to carry on development. Any further delay is a deliberate choice to reinforce the status quo.
GGC and other team members should act.
And @ardocrat, if you truly believe this project needs development, I expect you would accept a role where you are granted those privileges (commit, review, merge) and accept decentralized development. Otherwise it suggests this isn’t a real attempt to lead, but a performance.
6 Likes
I consent.
Currently I am reviewing the pool API to check why transactions are not being broadcast from my node.
4 Likes
@ardocrat ,
This is one of the best news I’ve heard.
Thank you for all of your hard work.
I’ve been on CT campaigning for Grin & you coming in as the warden/maintainer is truly a breath of fresh air.
The Grin Mimblewimble privacy protocol is truly cutting-edge & second to none. It’s a disservice for it to go under-utilized in the age of privacy awakening.
If you’re ever on CT or open to an interview with Vlad Costea, just let me know. I can set one up; I’ve set one up with Dr. Marek before, so we can put a larger audience spotlight on GRIN…
Otherwise Godspeed & blessings.
5 Likes
We had an interview with paranoiamachinery a while ago.
4 Likes
Thank you for that @ardocrat my friend.
I’ll pass it on to Vlad, keep an eye out on your CT / X account. All the best.
2 Likes
@ardocrat Congratulations , this funding request has been officially approved by the community and the Grin Governance Council (GGC).
Let us know when you officially start working full time on Grin so we can plan timely payments and keep track of your work.
16 Likes
@ardocrat
I have been studying Zcash’s rise in price & utility (as I’m a Zodler myself).
I’ve had long conversations with Grok AI & I believe that Grin Mimblewimble has an opportunity to capitalize on Zcash’s strategy.
Zcash is popular on the Binance exchange in BRICS nations & brings privacy to citizens in Russia, China, Brazil, & India.
If we can get GRIN listed on Binance, it’ll be one of the easiest methods to catch up to Zcash.
I hope you find this useful.
2 Likes
I think that’s for after 2029; for now there are still some bugs, QPC hysteria, and inflation above 10%.
Any bank can list Grin in the future.
CEXs are like banks.
If we integrate Grin into DEXes, it will be much better.
We just need Multisig.
4 Likes
@ardocrat
Here are the top 3 DEXes [multisig: yes] open to listing GRIN Mimblewimble:
Bisq
Haveno
DCRDEX
Banks (when GRIN usage and market cap go up):
Kraken
Gemini
Binance
There is most definitely an opportunity to capitalize on BRICS countries; hope this helps, Ardocrat.
We’ll keep in touch.
Banks will use CBDCs to restrict p2p exchange even more. For now we can use cash to exchange freely; Grin is a must-have alternative.
5 Likes
trab
November 25, 2025, 11:05pm
18
Not sure what is needed for Haveno, but here is this again: Suggestion: Make DEX - Gate.io is Over - #119 by trab
1 Like
Core development tasks for April:
master ← GetGrin:full_scan_fix
opened 11:14PM - 10 Apr 26 UTC
- Save the last `ScannedBlockInfo` when collecting outputs.
- Provide `start_pmmr_index`… from the saved `ScannedBlockInfo` so scanning does not restart from the beginning after an interruption.
master ← GetGrin:pibd_fixes
opened 10:19AM - 22 Mar 26 UTC
Fixed loop for kernel segment request:
- do not calculate theoretical pmmr size when the total segment count equals the current segment count
TODO: Timeout is not working properly for the rangeproof request
master ← ardocrat:pibd_peers_fix
opened 12:29PM - 30 Mar 26 UTC
- choose peers based on minimal height (minimum 2 blocks behind the max tip)
- temporarily block peers for stale segments, disconnecting only outbound peers:
- 1st time for 1 min.,
- 2nd time for 3 min.,
- 3rd time for 10 min.
(actual only for inbound; outbound resets the counter after reconnect)
- force requests for output and rangeproof segments to avoid getting stuck
- do not check for max cached segments when selecting the next desired segment to request (a stall happened when a peer got disconnected and we had reached the max segment cache size, so no further request was made)
master ← ardocrat:peers_fix
opened 11:08PM - 03 Apr 26 UTC
- removed marking of a random peer as healthy
- added `Unknown` state for new peers received from a `PEER_LIST` request
- added `last_attempt` field to peers, updated when the state changes to `Defunct` or `Healthy`
- check a random 128 (64 healthy, 32-64 unknown, 32-128 defunct) peers no more often than 1h since `last_attempt`
- mark a peer as `Defunct` when the ping did not pass and it got disconnected
- do not save connection time for new peers (a chance to clean them up without waiting for 2 weeks)
- do not crash on `grin-config.toml` parse with a DNS failure
- reconnect to seeds when the 1st request fails on an empty database, to avoid sync getting stuck
- do not save outbound peers to the connected list when there are enough outbound peers + disconnect immediately (useful for peer checks)
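The peer bookkeeping described above could be sketched roughly as follows. This is an illustrative, stdlib-only sketch: the names `PeerState`, `PeerInfo`, `set_state`, and `should_recheck` are hypothetical and not the actual grin types; only the described behavior (an `Unknown` state for fresh peers, a `last_attempt` stamp on `Healthy`/`Defunct` transitions, and a 1-hour recheck throttle) is taken from the PR text.

```rust
use std::time::{Duration, Instant};

// Hypothetical peer states; `Unknown` marks peers freshly received
// from a PEER_LIST request, before any connection attempt.
#[derive(Clone, Copy, PartialEq, Debug)]
enum PeerState {
    Unknown,
    Healthy,
    Defunct,
}

struct PeerInfo {
    state: PeerState,
    // Updated only when the state changes to Healthy or Defunct.
    last_attempt: Option<Instant>,
}

impl PeerInfo {
    fn new() -> Self {
        PeerInfo { state: PeerState::Unknown, last_attempt: None }
    }

    // Record a state change, stamping `last_attempt` only on
    // Healthy/Defunct transitions, as the PR describes.
    fn set_state(&mut self, state: PeerState) {
        self.state = state;
        if state == PeerState::Healthy || state == PeerState::Defunct {
            self.last_attempt = Some(Instant::now());
        }
    }

    // A peer is eligible for another check no more often than
    // once per hour since `last_attempt`.
    fn should_recheck(&self, now: Instant) -> bool {
        match self.last_attempt {
            None => true, // never attempted: always eligible
            Some(t) => now.duration_since(t) >= Duration::from_secs(3600),
        }
    }
}

fn main() {
    let mut peer = PeerInfo::new();
    assert!(peer.should_recheck(Instant::now()));
    peer.set_state(PeerState::Defunct);
    assert!(!peer.should_recheck(Instant::now()));
    println!("recheck throttling works");
}
```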
master ← ardocrat:lmdb_update
opened 11:31AM - 16 Apr 26 UTC
Migrate from the non-maintained [lmdb-zero](https://docs.rs/lmdb-zero/) to [heed](https://docs.rs/heed).
- Speeds up sync as a result
- No more getting stuck on parallel commands while syncing
- Single environment for `peers` and `chain` data, migrate existing dbs outside default `lmdb` environment (solves https://github.com/mimblewimble/grin/issues/3461)
- Ability to use multiple shared environments (per path).
- Use separate databases instead of prefixes, use default database for values without prefixes, migrate old environment (https://github.com/mimblewimble/grin/pull/3320)
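The “multiple shared environments (per path)” point could look something like the sketch below: callers asking for the same path get clones of one shared handle instead of opening a second LMDB environment. This is a stdlib-only illustration of the pattern; the `Env` struct and `shared_env` function are stand-ins, not the heed API or the actual grin code.

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex, OnceLock};

// Stand-in for a database environment handle (e.g. heed's Env).
#[derive(Debug)]
struct Env {
    path: PathBuf,
}

// Process-wide registry mapping a data path to its one shared environment.
fn registry() -> &'static Mutex<HashMap<PathBuf, Arc<Env>>> {
    static REG: OnceLock<Mutex<HashMap<PathBuf, Arc<Env>>>> = OnceLock::new();
    REG.get_or_init(|| Mutex::new(HashMap::new()))
}

// Return the shared environment for `path`, opening it at most once.
fn shared_env(path: &Path) -> Arc<Env> {
    let mut reg = registry().lock().unwrap();
    reg.entry(path.to_path_buf())
        .or_insert_with(|| Arc::new(Env { path: path.to_path_buf() }))
        .clone()
}

fn main() {
    let a = shared_env(Path::new("chain_data"));
    let b = shared_env(Path::new("chain_data"));
    // Same path -> the same underlying environment is reused,
    // so `peers` and `chain` data can live in one environment.
    assert!(Arc::ptr_eq(&a, &b));
    println!("shared env at {:?}", a.path);
}
```

LMDB only allows one environment per path per process, which is why this kind of per-path sharing matters when `peers` and `chain` data move into a single environment.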
In progress:
master ← mkorovkin2:mkorovkin2/addressing-todo-optimize-pmmr-segments
opened 12:34AM - 09 Jan 26 UTC
Addressing TODO item in code: optimize PMMR segments to only include hashes at pruning boundaries; tests updated as segment size dropped from 521 to 281 bytes (7 hashes -> 1 hash)
* This PR addresses the TODO at core/src/core/pmmr/segment.rs:338: "optimize, no need to send every intermediary hash"
* **PROBLEM:** During PIBD sync, segments are generated containing leaf data, intermediate hashes, and Merkle proofs. Previously, the from_pmmr function included hashes for every position in prunable segments, even when many of these hashes were unnecessary for validation. This resulted in unnecessarily large segment payloads being transmitted over the network.
* **SOLUTION:** identify "pruning boundaries" (positions where a fully pruned subtree meets a non-fully-pruned sibling). Only hashes at these boundaries are actually needed during segment validation (when the root() function encounters a (None, Some) or (Some, None) case and must look up the missing child's hash).
* **CHANGE IMPACT:** in the ser_round_trip test case, this optimization reduces the number of hashes from 7 to 1, shrinking the segment from 521 bytes to 281 bytes (46%); real-world savings will vary depending on pruning patterns, but any segment with partial pruning will see a reduced size.
* **CONSENSUS BREAKING?** No. (This change does not affect consensus rules.)
* **BREAKING CHANGES:** API change: The Segment::from_pmmr() function signature changed:
* ---> Before: from_pmmr(segment_id, pmmr, prunable: bool)
* ---> After: from_pmmr(segment_id, pmmr, bitmap: Option<&Bitmap>)
* Callers update:
* ---> false to None (for non-prunable segments like kernels, bitmaps)
* ---> true to Some(&bitmap) (for prunable segments like outputs, rangeproofs)
* **TESTING:**
* Unit tests: All existing segment tests pass `cargo test -p grin_core unprunable_mmr` / `cargo test -p grin_store --test segment` / `cargo test -p grin_chain --test bitmap_segment`
* All unit tests pass, and they're pretty reliable here since they generate segments with varied pruning patterns, immediately validate those segments, and cover a number of edge cases
* The ser_round_trip test assertion was updated to reflect the reduced hash count, serving as verification that the optimization is working as intended
* Manual build testing: build verification completed with `cargo build`; no new warnings introduced by this change.
* Manual testnet sync: fully works and passes; ensured that the node runs and syncs.
* Manual local testing across two nodes: synced node (node1) and fresh node (node2)
* node1 had synced full chain data, node2 connected to node1 and requested segments
* Flow tested:
* node1 calls Segmenter::output_segment() / rangeproof_segment() which uses the modified from_pmmr() to generate optimized segments
* node1 serializes and sends segments
* node2 deserializes and calls segment.validate() to verify the segment
* node2 successfully reconstructs the chain state
* ---> can see that optimized segments (with fewer hashes) survive serialization, network transmission, deserialization, and validation by a real node
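The "pruning boundary" idea from the SOLUTION bullet above can be illustrated with a toy sketch: in a perfect binary tree over a power-of-two number of leaves, a stored hash is only needed where a fully pruned subtree meets a sibling that still has unpruned leaves. The recursion and names (`fully_pruned`, `boundary_hashes`) are illustrative assumptions; the real PMMR layout and `from_pmmr` code differ.

```rust
// True if every leaf in the half-open range [lo, hi) is pruned.
fn fully_pruned(pruned: &[bool], lo: usize, hi: usize) -> bool {
    pruned[lo..hi].iter().all(|&p| p)
}

// Collect leaf ranges (lo, hi) of fully pruned subtrees whose sibling
// subtree still contains unpruned leaves: these are the boundary
// positions whose hashes a segment must carry for validation.
fn boundary_hashes(pruned: &[bool], lo: usize, hi: usize, out: &mut Vec<(usize, usize)>) {
    if hi - lo < 2 {
        return; // single leaf: no internal node here
    }
    let mid = lo + (hi - lo) / 2;
    let (lp, rp) = (fully_pruned(pruned, lo, mid), fully_pruned(pruned, mid, hi));
    match (lp, rp) {
        // One side pruned, the other not: the pruned side's root hash
        // is needed (the (None, Some)/(Some, None) case at validation).
        (true, false) => {
            out.push((lo, mid));
            boundary_hashes(pruned, mid, hi, out);
        }
        (false, true) => {
            out.push((mid, hi));
            boundary_hashes(pruned, lo, mid, out);
        }
        // Both pruned (handled at the parent) or both unpruned: recurse.
        _ => {
            boundary_hashes(pruned, lo, mid, out);
            boundary_hashes(pruned, mid, hi, out);
        }
    }
}

fn main() {
    // 8 leaves, leaves 0..4 pruned, 4..8 present: only one boundary
    // hash (the root of the pruned left half) is required.
    let pruned = [true, true, true, true, false, false, false, false];
    let mut out = Vec::new();
    boundary_hashes(&pruned, 0, pruned.len(), &mut out);
    assert_eq!(out, vec![(0, 4)]);
    println!("boundary hashes needed: {:?}", out);
}
```

This matches the intuition behind the 7-hashes-to-1 reduction in the ser_round_trip case: most intermediary hashes can be recomputed from leaf data, and only hashes covering fully pruned regions adjacent to live data must be shipped.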
TBD:
Fix headers sync slowdown with a lot of outbound peers.
Migrate grin-wallet to the new DB after the merge.
…
7 Likes