How to free up space if you own multiple vNodes

Important

I’m not maintaining the code anymore. If you are looking for a hassle-free way to host your nodes, consider my hosting service: no private keys required and monthly payments based on earnings.

Hosting service for vNodes. The website is fully automated.

Also: don’t update to versions beyond 2.0. Those are not intended for the public but for my hosting service.


I have created a Node script that will automatically create hard links for every .ldb file that you may have duplicated in your /home/incognito directory.
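
Conceptually, what the script does for each duplicated file boils down to something like this (a minimal sketch in TypeScript, not the actual implementation; the paths are made up for illustration):

    import { linkSync, rmSync } from "node:fs";

    // Two copies of the same immutable .ldb block file, one per node (illustrative paths).
    const original = "/home/incognito/node_data_0/mainnet/block/shard0/000123.ldb";
    const duplicate = "/home/incognito/node_data_1/mainnet/block/shard0/000123.ldb";

    // Replace the duplicate with a hard link to the original: both paths
    // now point at the same bytes on disk, so the data is stored only once.
    rmSync(duplicate);
    linkSync(original, duplicate);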

I’ve tested it myself and nothing has changed with my nodes: they are all still working the same, except that now I have around 200 GB more free space.

You could create a cronjob to run this once a day without needing to worry about whether your nodes are in committee, because the script will check that for you and skip them if they are, to prevent slashing.
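
For example, a crontab entry along these lines would run it daily at 03:00; the schedule is arbitrary, and the path and launch command are placeholders to adapt to your own setup:

    # Run the duplicated-files cleaner once a day at 03:00 (path and command are placeholders)
    0 3 * * * cd /path/to/Duplicated-files-cleaner-Incognito && node . >> cleaner.log 2>&1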

https://github.com/J053Fabi0/Duplicated-files-cleaner-Incognito

Update :sauropod:

You could use the Deno version, which has some extra features, like setting your own instructions or copying from a fullnode. I recommend that everyone use Deno instead of Node.

https://github.com/J053Fabi0/Duplicated-files-cleaner-Incognito/tree/deno



If this helped you and you want to say thanks, consider doing so in the memo of a donation:

12sdoBt4XsFUmNiyik3ZiuKMZnA9wv8cTf6QLctNQTeMrCu5HEkSXjPZF7KC2ncfLuGTW9sAUAeU59gVpoyydtWtP2KcYrrCuNRfWJ4q5jsqpaRwLet8TLLvgH6zoBaYr7dDoSH8WVYqpPycLzG3

10 Likes

This is super awesome @J053, I believe vNode operators will appreciate your great work (myself included, to be honest).
By the way, the protocol team has also been working on a few solutions to reduce the blockchain data size; we will publish a proposal for community feedback once we finalize it internally.
P.S.: sending some PRV to your wallet now :blush:

6 Likes

Btw, there is a typo or Unicode error in the provided address; it should be:
12sdoBt4XsFUmNiyik3ZiuKMZnA9wv8cTf6QLctNQTeMrCu5HEkSXjPZF7KC2ncfLuGTW9sAUAeU59gVpoyydtWtP2KcYrrCuNRfWJ4q5jsqpaRwLet8TLLvgH6zoBaYr7dDoSH8WVYqpPycLzG3

2 Likes

:warning: Important update :warning:

I made a significant update to the script. It no longer requires you to insert the instructions manually inside constants.js, but in exchange it now needs the validator public keys instead of the public keys, and they are not optional.

I would advise anyone who would like to update the script to delete its directory and set it up from scratch following the README.md.

3 Likes

There is a short discussion here: Multiple vNodes on same host, re-use storage?
So isn’t it going to be a problem if two nodes try to use shared data/files?

1 Like

Hello @radonm,

@J053’s script uses hard links, which reference the same underlying files, so there will be no issue for your nodes.

This is highly beneficial and recommended.

1 Like

Hard links mean the same file in different folders, so it is shared data; it is the same file.

The script only hard-links the old files. Basically, it uses your first node to bootstrap the other nodes. Past that point, each node will use its own directory to store files.

It is the same concept as using my bootstrap script, but it saves space instead of copying the bootstrap files to each directory.

1 Like

It only hard-links .ldb files, which are static and never change, since they store the blockchain. It leaves the other files that need to change between nodes intact so the node can still work independently, as it should.
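
As a sketch of that selection rule, assuming the usual LevelDB layout where block data lives in numbered .ldb table files (the path is illustrative):

    import { readdirSync } from "node:fs";

    // Only immutable LevelDB table files (*.ldb) are safe to share between nodes.
    // Files that LevelDB rewrites while the node runs (MANIFEST-*, LOG, CURRENT, LOCK)
    // must stay independent per node, so they are left untouched.
    const linkable = readdirSync("/home/incognito/node_data_0/mainnet/block/shard0")
      .filter((name) => name.endsWith(".ldb"));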

2 Likes

I fixed a little bug in the code. Please update it with git pull.

Great news: I updated the code to use Deno :sauropod: instead, and I added some bug fixes and extra features that you can read about in the new README.md.

https://github.com/J053Fabi0/Duplicated-files-cleaner-Incognito/tree/deno

2 Likes

@J053 can you provide a step-by-step guide on how to do this?

I am really interested, as my devices are already at 90+% capacity, but I am still a novice when it comes to Linux and this kind of code. I tried figuring it out on my own using the GitHub README but am completely lost.

Good at following step-by-step instructions though :smiley:

On which steps are you lost? I can help you with particular questions.

So the problem is I am running multiple nodes on 2 HDDs, and I can’t afford to screw up the server if I mess this up. So I’d have to set up a test box just to try this out, unless I am certain I know what I’m doing, which up to now I do not.

I looked at the 3 example files and I can’t quite figure out which one to use and how to configure it. I am not running a fullnode, so I am guessing I don’t need the third example file. But probably the 2nd?

So if I have, let’s say, 10 nodes:

inc_mainnet_0 through inc_mainnet_4 running on hdd1 at location /hdd1/incognito/nodes/node_data…

and inc_mainnet_5 through inc_mainnet_9 running from location /hdd2/incognito/nodes/node_data…

How would I configure constants.example.ts?

Interesting case. The script can only handle one homePath, so you’ll have to copy and paste the folder twice, for example having Duplicated-files-cleaner-Incognito_1 and Duplicated-files-cleaner-Incognito_2.

On each you can then use example 2, with homePath set to /hdd1/incognito and /hdd2/incognito, respectively. For the one in charge of hdd1, only set the nodes that are inside that directory, and the same for the other.

For hdd2, the validatorPublicKeys would look something like:

    validatorPublicKeys: {
      5: "14nUEc4Yh...",
      6: "1BoLvywGD....",
      // and so on...
      9: "1Ehsgg9LE....",
    },

because you’ll manage only nodes 5 to 9 there.
Inside instructions, only include the entries for nodes 5 to 9.
Then the same logic applies to the copy dedicated to hdd1, as in the sketch below.
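
To make that concrete, the relevant part of the constants file in the copy handling hdd1 could look roughly like this; the field names mirror the snippet above, and the key values are placeholders to fill in from constants.example.ts:

    // Duplicated-files-cleaner-Incognito_1, responsible only for hdd1 (sketch).
    homePath: "/hdd1/incognito",
    validatorPublicKeys: {
      0: "1...", // inc_mainnet_0
      1: "1...",
      // and so on...
      4: "1...", // inc_mainnet_4
    },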

1 Like

@J053 I am still working on getting this implemented. For my example above, let’s say the shards are distributed this way. For this part of the example, let’s assume all of the nodes are on one hard drive:

shard1 --> nodes 1 and 5
shard2 --> nodes 2 and 3
shard3 --> node 4
shard4 --> none
shard5 --> none
shard6 --> nodes 7, 8, and 9
shard7 --> none

I am trying to understand how to configure constants.ts, and I don’t quite fully get toNodesIndex and fromNodesIndex.

Can you help me figure this out?

For simplicity, you will want to use constants1, as this will automatically pull the assigned shard number for you.

So then why would I need constants2?

That’s only needed if you want to set the rules for which containers hard-link to one another.
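
As a rough guess at what defining those rules could look like for the shard layout above, reading fromNodesIndex as the node that already holds the shard data and toNodesIndex as the nodes that receive the hard links (the exact schema here is an assumption, not the script’s confirmed format):

    // Hypothetical shape: only the names fromNodesIndex and toNodesIndex come
    // from this thread; everything else is assumed for illustration.
    instructions: [
      { shardName: "shard1", fromNodesIndex: 1, toNodesIndex: [5] },
      { shardName: "shard2", fromNodesIndex: 2, toNodesIndex: [3] },
      { shardName: "shard6", fromNodesIndex: 7, toNodesIndex: [8, 9] },
    ],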

I do want to do that. I’m only doing this on 20 of my 40 nodes, so I need to define them.

Can you help answer my original question? Or @J053, can you assist me please?