Running a Grin++ public node (Linux)

Hi! If you have a small VPS and want to contribute to Grin by running a public node, I wrote a small Python script that downloads the latest Grin++ release and extracts the node binary into a folder of your choice. The script can be found here. You will need python3 to run it. You can run it like this:

# python3 --help
usage: [-h] [--prelease] -d DESTINATION -b BINARY

Grin++ Downloader

optional arguments:
  -h, --help            show this help message and exit
  --prelease            Download prelease if the prelease is the latest release (default: false)

required named arguments:
  -d DESTINATION, --destination DESTINATION
                        Destination folder
  -b BINARY, --binary BINARY
                        Name of the binary
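
Under the hood the script does roughly the following. This is a shell sketch of the idea, not the actual script: the GitHub API URL and asset names are my assumptions, and it uses the same curl and jq we install below.

```shell
# Pick the .AppImage download URL out of a GitHub release JSON document
# read from stdin (the release JSON has an "assets" array).
pick_appimage_url() {
    jq -r '.assets[] | select(.name | endswith(".AppImage")) | .browser_download_url' | head -n1
}

# Usage (network access required; repository path is an assumption):
#   URL=$(curl -s https://api.github.com/repos/GrinPlusPlus/GrinPlusPlus/releases/latest | pick_appimage_url)
#   curl -sL -o GrinPlusPlus.AppImage "$URL" && chmod +x GrinPlusPlus.AppImage
```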

Let’s make sure you have what you need:

$ sudo apt install -y curl jq python3
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3 is already the newest version (3.8.2-0ubuntu2).
curl is already the newest version (7.68.0-1ubuntu2.7).
jq is already the newest version (1.6-1ubuntu0.20.04.1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

To get the script you can use curl. The URL of the raw file might change, so I recommend double-checking the URL:

$ curl --silent -O

Excellent! Now let’s run the script:

$ python3 --prelease --destination /usr/bin --binary grin
Getting the latest release...
        Getting assets for v1.2.7-beta.1...
Downloading GrinPlusPlus-1.2.7-beta.1.AppImage file...
Extracting Grin node...
Stopping Grin node...
Cleaning up peers database...
Copying GrinNode binary...
Copying tor folder...
Assigning execution permissions...
Removing unnecesary files...

Tip: Start Grin node in brackground by executing the next command:
                nohup /usr/bin/grin > /dev/null 2>&1 &

To run the node in the background you can run it like this:

$ nohup /usr/bin/grin > /dev/null 2>&1 &

With nohup we make sure that the node keeps running after we log out.

The last step is to make sure that the node is running:

$ ps -ax | grep grin && curl -s | jq
   748 ?        Sl     6:41 /usr/bin/grin
 47308 pts/0    S+     0:00 grep grin
  "chain": {
    "hash": "0001255cf5ddf2d6ea6cd7c3366e4f619a4271efa2b1f4bbf77ac9a983ce2b1d",
    "height": 1575018,
    "previous_hash": "0001b128a386eb979926fedb76cd47ef2023afdab0da9078057a8b2f9ae2f1a1",
    "total_difficulty": 1876778791393289
  "header_height": 1575018,
  "network": {
    "height": 1575018,
    "num_inbound": 50,
    "num_outbound": 10,
    "total_difficulty": 1876778791393289
  "protocol_version": 1000,
  "state": {
    "download_size": 0,
    "downloaded": 0,
    "processing_status": 0
  "sync_status": "FULLY_SYNCED",
  "user_agent": "Grin++ 1.2.7"

Make sure to allow incoming connections on port 3414 so your node can serve others.
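
For example, with ufw (assuming your server uses it; 3414 is Grin's default P2P port):

```shell
# Open Grin's P2P port to inbound peers and confirm the rule is active.
sudo ufw allow 3414/tcp
sudo ufw status
```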

If you want to manage the number of peers, open the config file at ~/.GrinPP/MAINNET/server_config.json and set the MAX_PEERS and MIN_PEERS values.
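
I'm not certain of the file's full schema, but a minimal sketch, assuming the two keys sit at the top level of server_config.json, would look like:

```json
{
    "MAX_PEERS": 60,
    "MIN_PEERS": 10
}
```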


Could you please explain, or point me to documentation where I can find the server_config.json options?

It would be great to:


systemd to auto start GRIN++ node.

I did write a small systemd service to auto start the GRIN++ node.


[Unit]
Description=Grin node++
After=network.target

[Service]
User=grin
WorkingDirectory=/opt/grin/grinPP
ExecStart=/opt/grin/grinPP/grin --headless
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target


Adjust your WorkingDirectory and ExecStart parameters as well as the User!

Place this file under /etc/systemd/system as grinPP.service.
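
After placing or editing the unit file, tell systemd to reload its configuration so the new service is picked up:

```shell
# Re-read all unit files so systemd sees the new grinPP.service.
sudo systemctl daemon-reload
```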

You can now start the service using:

  • systemctl start grinPP.service

You can now stop the service using:

  • systemctl stop grinPP.service

You can now check the status of this service using:

  • systemctl status grinPP.service
 grinPP.service - Grin node++
     Loaded: loaded (/etc/systemd/system/grinPP.service; disabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-01-18 10:37:05 UTC; 8min ago
   Main PID: 103473 (grin)
      Tasks: 84 (limit: 4557)
     Memory: 371.1M
        CPU: 32.088s
     CGroup: /system.slice/grinPP.service
             ├─103473 /opt/grin/grinPP/grin --headless > /dev/null 2>&1 &
             └─103511 /opt/grin/grinPP/tor/tor --ControlPort 3423 --SocksPort 3422 --DataDirectory /grin/.GrinPP/MAINNET/TOR/data3423 --HashedControlPassword 16:906248AB51F939ED605CE9937D3B1FDE65DEB4098A889B2A07AC221D8F -f /grin/.>

Jan 18 10:40:23 grin[103511]: Jan 18 10:40:23.000 [notice] Heartbeat: Tor's uptime is 0:03 hours, with 9 circuits open. I've sent 488 kB and received 4.74 MB.
Jan 18 10:40:53 grin[103511]: Jan 18 10:40:53.000 [notice] Heartbeat: Tor's uptime is 0:03 hours, with 8 circuits open. I've sent 498 kB and received 4.75 MB.
Jan 18 10:41:23 grin[103511]: Jan 18 10:41:23.000 [notice] Heartbeat: Tor's uptime is 0:04 hours, with 8 circuits open. I've sent 505 kB and received 4.75 MB.
Jan 18 10:41:53 grin[103511]: Jan 18 10:41:53.000 [notice] Heartbeat: Tor's uptime is 0:04 hours, with 10 circuits open. I've sent 511 kB and received 4.76 MB.

To enable this service at reboot:

  • systemctl enable grinPP.service

logrotate to rotate GRIN++ logs.

/etc/logrotate.d# cat grinPP 

/grin/.GrinPP/MAINNET/LOGS/* {
    daily
    rotate 3
    size 10M
}

This will rotate your logs on a daily basis.

  • rotate 3 indicates that only 3 rotated logs should be kept. Thus, the oldest file will be removed on the fourth subsequent run.
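
You can dry-run the rule before relying on it; logrotate's debug flag prints what would happen without actually rotating anything:

```shell
# Debug/dry-run mode: shows the rotation decisions, changes nothing.
sudo logrotate -d /etc/logrotate.d/grinPP
```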

Might make sense to put the tooling scripts in some repository to avoid getting them lost in the forum posts.


Good idea. We’re deploying a few nodes and testing a bit first.


e.g. here:


Today I ran my first node and it was a piece of cake.

I had some issues but @davidtavarez helped me.
At one point my node seemed to be stuck: chain height and network height were out of sync, with nothing left to do.
The solution was to stop the node, remove the peers database, and restart everything:

killall -9 grin
nohup /usr/bin/grin > /dev/null 2>&1 &
$ ps -ax | grep grin && curl -s | jq

To make sure that your node is fully synced, check that chain height = network height.
I have around 47 inbound connections, which means we need more nodes on our lovely network. I hope that helps :smiley:
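
That last check can be scripted. A small sketch: a node is fully synced when chain height equals network height, assuming you saved the status JSON from the curl call shown earlier into status.json (the endpoint URL is omitted in the post above):

```shell
# Print "synced" if chain height equals network height in the given
# status JSON file, otherwise "syncing".
sync_state() {
    jq -r 'if .chain.height == .network.height then "synced" else "syncing" end' "$1"
}

# Usage:  sync_state status.json
```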