How to set up your own node in the blink of an eye (now supports multiple nodes per host)

Yes, you just need Infura.

There are many things that can cause a node to be slashed. We would have to take a look at the logs to find the cause. If your node is slashed, feel free to contact us and we can help investigate.

Before slashing takes effect, could we have an in-depth post about it? For instance, the common reasons for slashing and how to prevent each one.

Yes, we will do that, thanks for the tip.

1 Like

Hey guys,
I’ve just updated the script and this guide to support running multiple nodes per host.
Please check it out!

4 Likes

Thank you so much! It’s great to see an official script for running multiple nodes!!

Do we need to change either of the following?

NUM_INDEXER_WORKERS can be left unchanged.

INDEXER_ACCESS_TOKEN does not strictly need to be changed, but you should change it.
Doing so helps prevent unauthorized access to the coin indexer on your node; otherwise, when the RPC “authorizedsubmitkey” is called, the node can spend all of its computing power indexing coins instead of doing its main job (block verification, inserting blocks into the chain, etc.).
You can find more about coin indexing here: Full-nodes' Cache - A Faster Way to Retrieve Users' Output Coins
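If you want a quick way to set a hard-to-guess value, here is a minimal sketch. The variable name INDEXER_ACCESS_TOKEN comes from the guide above; the generation method (reading /dev/urandom) is just one option, not something the official script prescribes.

```shell
# Sketch: generate a random 64-hex-character value to use as INDEXER_ACCESS_TOKEN.
# /dev/urandom + od is used here for portability; openssl rand -hex 32 works too.
INDEXER_ACCESS_TOKEN=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$INDEXER_ACCESS_TOKEN"
```

Any sufficiently random string works; the point is simply that the token should not be left at a guessable default.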

3 Likes

Do validator nodes use the indexer at all? When I first read it, I thought it was only for full nodes to cache users’ coin balances for transaction creation. But I’m a bit confused about the v2 details so far, so I might be wrong. :relaxed:

Yes, the coin indexer is only used for caching coins; it can be disabled if you have no need to query balances or create transactions on your node.
I will add an option to the setup script for enabling/disabling the cache later.

1 Like

Thanks for the explanation @Rocky.

I set up my node using the previous method, and I can speak to this issue first-hand: my node didn’t get an update and sat there idle for more than a month (no PRV earnings were coming in), but everything looked fine from a “docker stats” point of view.

I only noticed when my node showed up as offline on the Node Monitor (shout-out to @Jared for bringing that site to my attention).

With the kind help of a member of this amazing community, I isolated the problem, which led me to believe the update script on my node got killed and my node didn’t update.

Thanks for this post Rocky, I was able to get my node up and running in a blink of an eye :smiley:

4 Likes

Woot! :pray: :pray: :pray:

1 Like

A couple of things I’ve run into:

The container-start part of the script does not attempt to stop running containers before removing them, which throws an error:

IncNodeUpdt[12235]: Remove old container
IncNodeUpdt[12431]: Error response from daemon: You cannot remove a running container XXXX. Stop the container before attempting removal or force remove

So the docker run step that follows will not succeed, as the container name is already in use:

IncNodeUpdt[12440]: docker: Error response from daemon: Conflict. The container name “/inc_mainnet_2” is already in use by container “XXXX”. You have to remove (or rename) that container to be able to reuse that name.
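The fix is straightforward: stop the container before removing it. A minimal sketch, where the container name is taken from the log above and the helper function name is hypothetical:

```shell
# Hypothetical helper: stop a container before removing it, so 'docker rm'
# never hits the "cannot remove a running container" error.
remove_container() {
  name="$1"
  docker stop "$name" >/dev/null 2>&1 || true  # ignore failure if it is not running
  docker rm "$name"                            # safe now that it is stopped
}
```

Equivalently, docker rm -f "$name" stops and removes in one step.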

As noted in another thread, none of the docker commands have their exit codes checked. The inc_node_latest_tag file still gets updated even if any of those steps fail, and a failure could be caused by a temporary issue (like the pull failing due to a network problem). As written, since there’s no retry logic, the update script won’t attempt to update the containers again until the next image update. Since nodes will be slashed if they are running old versions of the software, this is problematic.
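A sketch of the guard being asked for: record the new tag only after every docker step has succeeded. The tag file and image names come from this thread; the function name and the exact sequence of commands are assumptions about the script, not its actual contents.

```shell
# Sketch: abort the update, leaving inc_node_latest_tag untouched, if any docker step fails.
update_node() {
  tag="$1"
  docker pull "incognitochain/incognito-mainnet:$tag" || return 1  # e.g. transient network failure
  docker rm -f inc_mainnet_0 || true                               # ok if no old container exists
  docker run -d --name inc_mainnet_0 "incognitochain/incognito-mainnet:$tag" || return 1
  echo "$tag" > inc_node_latest_tag  # only reached when both pull and run succeeded
}
```

Because the tag file stays untouched on failure, the next scheduled run of the updater would retry automatically instead of waiting for the next image release.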

Lastly, the current logic for fetching the latest image tag is somewhat fragile. Since it relies on a sort, there’s a chance it will fail to pull the correct tag in the future, which could cause major network issues. I would recommend the team publish new images with an additional tag that always points to the correct image (like a ‘current’ tag). That way, clients can just periodically pull incognitochain/incognito-mainnet:current to keep up to date.

A consistent tag is the only thing preventing me from using pure docker-compose at the moment. Periodically running docker-compose pull && docker-compose up -d would be a lot more reliable.

2 Likes

Hi @heavypackets,
About the “cannot remove a running container” error: it has been fixed.
And I agree with you that a consistent tag would be the best approach. But the current method has been in use for a long time, and by now many things depend on it, so changing it would cause us more problems. We will consider making the change in the future; for now, we just have to work with it.

1 Like

Just to clarify, though: you can always push an image to Docker Hub with additional tags. This would be a non-breaking change; the old tag method would continue to work, but anyone who wanted to track a stable tag could do so. This is an idiomatic way to use tags.

Before pushing the image, you would tag it locally with as many tags as you want and then push with the --all-tags flag.
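As a sketch of that workflow: the repo name is from this thread, while the version tag and the helper function name are hypothetical.

```shell
# Sketch: give one image both its version tag and a stable 'current' tag, then push all tags.
publish_with_stable_tag() {
  image="$1"; version="$2"
  docker tag "$image:$version" "$image:current" || return 1
  docker push --all-tags "$image"   # --all-tags requires Docker 20.10+
}
```

For example, publish_with_stable_tag incognitochain/incognito-mainnet 20211001_1 (with a made-up version tag) would push both :20211001_1 and :current in one go.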

PS. I was an engineer at Docker for a bit. Happy to help out with any of this stuff if community input would be helpful.

4 Likes

This is great, should we apply this @duc?

1 Like

Hi, do you need one Infura key per virtual node?

Thanks
Jay

No, one is enough for all of them.

2 Likes

I am getting the following error message when I check on the startup/installation log sequence using journalctl:

IncNodeUpdt[3333]: Create new docker network
IncNodeUpdt[3375]: Error response from daemon: network with name inc_net already exists
IncNodeUpdt[3333]: Remove old container
IncNodeUpdt[3384]: inc_mainnet_0
IncNodeUpdt[3459]: inc_mainnet_0
IncNodeUpdt[3333]: Start the incognito mainnet docker container

My node appears to be working/syncing on the Node Monitor. I am using the most recent inc_node_installer.sh. When I list Docker containers, I don’t see any “inc_net” container, or any container other than “inc_mainnet_0”. I’m not sure what the “inc_net” network refers to or how to prevent this error message. Any suggestions?

Hello @Socrates,

In this case, the error you’re seeing is harmless. The script attempted to create a Docker network named inc_net for your container; since that network already existed, the command returned an error. (inc_net is a network, not a container, which is why it doesn’t show up when you list containers.)

In the event you run multiple nodes on the same host computer, they will all use the same Docker network (inc_net).

2 Likes

Thanks for the clear response.

The last line of my startup/installation log sequence is this:

IncNodeUpdt[3888]: + set +x

Is this normal and what does it do/confirm?

I believe that’s normal. The script runs with set -x, which makes the shell print every command before executing it (that’s where the “+” prefix in the log lines comes from, and it’s handy for debugging later); set +x at the end just turns that tracing off, so seeing it means the script ran to its last line.
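A tiny demonstration of what tracing does, outside of the installer script:

```shell
# set -x prints each command to stderr (with a '+' prefix) before executing it;
# set +x turns the tracing back off.
set -x
echo "hello"
set +x
```

Run in a terminal, this prints "+ echo hello" (the trace) followed by "hello" (the command’s own output).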

2 Likes