Has anybody calculated the PRV yield of having multiple nodes?
Thanks Jared. That is per each node, I’m assuming? Running multiple nodes doesn’t change those numbers?
Yes, I believe it's an average over all nodes, so that issues with any single node don't skew the numbers.
It makes the numbers more stable. What I do is take the combined stats of my entire farm and divide by the node count to get an average per node. This evens out the small differences between shards and epochs, and also, as Jared mentioned, occasional network outages or problems.
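The averaging described above can be sketched as follows (the node earnings here are made-up illustrative numbers, not real farm stats):

```python
# Hypothetical per-node PRV earnings over the same period (illustrative only).
node_earnings_prv = [13.2, 12.8, 13.5, 12.9]

# Combined farm earnings divided by node count gives a per-node average,
# which smooths out shard/epoch variance and occasional node outages.
total = sum(node_earnings_prv)
average_per_node = total / len(node_earnings_prv)
print(round(average_per_node, 2))  # 13.1
```

The point is that any one node's bad epoch barely moves the farm-wide average, so the per-node figure is more trustworthy than any single node's stats.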
How many nodes can you host on one machine? Is there a hard limit, or is it limited by the size of your SSD?
Depending on your hardware, there is no hard limit on how many you can run.
You’ll need 2 GB of RAM per node (I set this as a hard limit in my Docker config) and ~80 GB of disk per node. An SSD should be the minimum; I use NVMe and recommend it for faster write speeds (higher vote count). When you run multiple nodes on the same server, you can save disk space with this hard-link script (I run it daily, sometimes more often depending on how active the network is): https://github.com/J053Fabi0/Duplicated-files-cleaner-Incognito
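The linked cleaner is the real tool; as a rough illustration of the idea only (the directory layout and file names below are assumptions, not Incognito's actual on-disk format), hard-linking duplicate chain files across node data directories looks roughly like this:

```python
import hashlib
import os
from pathlib import Path

# Assumed layout for illustration: each node keeps its chain data under
# /data/node-N/ -- adjust to wherever your nodes actually store their data.
NODE_DIRS = [Path(f"/data/node-{i}") for i in range(1, 4)]

def file_digest(path: Path) -> str:
    """Hash file contents so identical chain files can be detected across nodes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def dedupe(dirs):
    """Replace duplicate files with hard links to the first copy seen."""
    seen = {}  # digest -> first path holding that content
    for d in dirs:
        if not d.is_dir():
            continue
        for path in sorted(p for p in d.rglob("*") if p.is_file()):
            digest = file_digest(path)
            original = seen.get(digest)
            if original is None:
                seen[digest] = path
            elif path.stat().st_ino != original.stat().st_ino:
                # Same content, different inode: relink so the data
                # is stored once and shared by every node directory.
                path.unlink()
                os.link(original, path)

# Usage: dedupe(NODE_DIRS)
```

Since every validator stores the same chain, most of each node's ~80 GB is identical bytes, which is why hard-linking (or filesystem-level dedup) recovers so much space.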
If my calculations are correct each node has used on average ~33GB of data per month.
Since changing hardware, I have not found my node limit yet. I’m running 73 nodes on an [email protected], 32 GB of RAM, and a 3 TB SSD. It’s pulling about 50 GB of network data per day.
The SSD turned out to be way overkill. I bought it because I ran a lot of different storage tests in parallel before pruning was implemented. Now I use around 6-10% of the space, so a 500 GB drive would have been enough.
How? Trade secret that involves some custom scripting!
How do you make 500 GB last for 73 nodes?
The beauty of blockchain is that everyone has the same data, which is true for your Incognito nodes as well. @Jared mentioned the duplicate-file cleaner; that's one way to do it. I solved it with ZFS instead.