How to set up your own node in the blink of an eye (now supports multiple nodes per host)

We no longer use the eth_mainnet docker image. We use Infura instead.

If only inc_mainnet is running and no Infura settings are included in the script, what Incognito functionality is lost? My nodes appear to be functioning normally.

Please check the post again; I believe there's already a reference to the Infura setup guide.

Sorry, I see the post now and have set up Infura and restarted the node. One last question regarding Infura: how can I test or check that the Infura connection to the ETH network is working?

@Socrates, let your node sync data and validate transactions for a while (a few days, I think), then check the dashboard on Infura to see whether any queries have come through.
Mine, for example:
[Infura dashboard screenshot]
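If you don't want to wait, a quick sanity check is to hit your Infura endpoint directly from the host. A minimal sketch, assuming the standard Infura v3 mainnet URL; replace YOUR-PROJECT-ID with your own project ID:

    # Ask Infura for the latest Ethereum block number over JSON-RPC.
    curl -s -X POST https://mainnet.infura.io/v3/YOUR-PROJECT-ID \
      -H "Content-Type: application/json" \
      -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
    # A response containing a hex "result" (the block number) means the key is live.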

1 Like

Just for clarification for the community.

The eth-mainnet docker container can be killed and we should be running Infura now, correct? I had been running both.

Is there anything other than adding our Infura API key that we need to do to ensure our nodes don't get slashed?

Yes, you only need Infura now.
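If the old Ethereum container is still around, it can be stopped and removed. A sketch only, assuming it runs under the name eth_mainnet as in the original script; adjust the name if yours differs:

    # Stop and remove the now-unneeded Ethereum mainnet container.
    docker stop eth_mainnet
    docker rm eth_mainnet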

There are many things that can cause a node to be slashed. We would have to take a look at the logs to find the cause. If your node is slashed, feel free to contact us and we can help investigate.

Before slashing takes effect, can we have an in-depth post about it? For instance, the common reasons for slashing and how to prevent each one.

Yes, we will do that, thanks for the tip.

1 Like

Hey guys,
I’ve just updated the script and this guide to support running multiple nodes per host.
Please check it out!

4 Likes

Thank you so much! It’s great to see an official script for running multiple nodes!!

Do we need to change either of the following?

NUM_INDEXER_WORKERS can be left unchanged.

INDEXER_ACCESS_TOKEN does not strictly need to be changed, but we recommend changing it.
Setting your own token helps prevent unauthorized access to the coin indexer on your node; otherwise, anyone calling the “authorizedsubmitkey” RPC could make the node spend all of its computing power indexing coins instead of doing its main job (block verification, inserting blocks into the chain, etc.).
You can find more about coin indexing here: Full-nodes' Cache - A Faster Way to Retrieve Users' Output Coins
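For reference, a rough sketch of how these two variables might look in the setup script's environment section. The variable names come from the post above; the values are placeholders only:

    NUM_INDEXER_WORKERS=100                          # leave at the script's default
    INDEXER_ACCESS_TOKEN="$(openssl rand -hex 32)"   # set a random secret so only you can trigger coin indexing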

3 Likes

Do validator nodes use the indexer at all? When I first read it, I thought it was only for full nodes to cache user coin balances for transaction creation. But I’m a bit confused about the v2 details so far, so I might be wrong. :relaxed:

Yes, the coin indexer is only used for caching coins; it can be disabled if you have no need to query balances or create transactions on your node.
I will add an option to the setup script to enable/disable the cache later.

1 Like

Thanks for the explanation @Rocky.

I set up my node using the previous method, and I can speak to this issue first-hand: my node didn’t get an update and sat there idle for more than a month (no PRV earnings were coming in), but everything looked fine from a “docker stats” point of view.

I only found out when I noticed my node was offline on the Node Monitor (shout out to @Jared for bringing that site to my attention).

With the kind help of a member of this amazing community, I was able to isolate the problem, which led me to believe the update script on my node got killed and my node didn’t update.

Thanks for this post, Rocky. I was able to get my node up and running in the blink of an eye :smiley:

4 Likes

Woot! :pray: :pray: :pray:

1 Like

A couple of things I’ve run into:

The container start part of the script does not attempt to stop the running containers before removing them. That will throw an error:

IncNodeUpdt[12235]: Remove old container
IncNodeUpdt[12431]: Error response from daemon: You cannot remove a running container XXXX. Stop the container before attempting removal or force remove

So the docker run step that follows will not succeed as the container name is already in use:

IncNodeUpdt[12440]: docker: Error response from daemon: Conflict. The container name “/inc_mainnet_2” is already in use by container “XXXX”. You have to remove (or rename) that container to be able to reuse that name.
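A small change along these lines would avoid both errors. A sketch only; the container name follows the inc_mainnet_N pattern shown in the logs above:

    # Stop the old container first (ignore the error if it isn't running),
    # then remove it so the docker run that follows can reuse the name.
    docker stop inc_mainnet_2 2>/dev/null
    docker rm inc_mainnet_2 2>/dev/null
    # Alternatively, a single forced removal: docker rm -f inc_mainnet_2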

As noted in another thread, none of the docker commands have their exit codes checked. The inc_node_latest_tag file still gets updated even if any of those steps fail, which could happen due to temporary issues (like the pull failing because of a network problem). As written, since there’s no retry logic, the update script won’t attempt to update the containers again until the next image update. Since nodes will be slashed if they are running old versions of the software, this is problematic.
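The gist of the fix would be to gate the tag file update on the docker steps actually succeeding. A sketch: only the image name and the inc_node_latest_tag file name come from this thread; everything else is illustrative:

    # Only record the new tag once the pull has succeeded; otherwise leave the
    # old tag in place so the next run of the update script tries again.
    if docker pull incognitochain/incognito-mainnet:"$new_tag"; then
      echo "$new_tag" > inc_node_latest_tag
    else
      echo "image pull failed; will retry on the next run" >&2
    fi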

Lastly, the current logic around fetching the current image tag is somewhat fragile. Since it relies on a sort, there’s a chance this will fail to pull the correct tag in the future, which could cause major network issues. I would recommend the team publish new images with an additional tag that will always point to the correct images (like a ‘current’ tag). That way clients can just periodically pull incognitochain/incognito-mainnet:current to keep up to date.
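With such a tag published, the client side of the update could be as simple as a scheduled:

    # Pull whatever the stable tag currently points to
    # (the 'current' tag is the convention proposed above,
    # not something published on Docker Hub today).
    docker pull incognitochain/incognito-mainnet:current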

A consistent tag is the only thing preventing me from using pure docker-compose at the moment. Periodically running docker-compose pull && docker-compose up -d would be a lot more reliable.

2 Likes

Hi @heavypackets,
The “cannot remove a running container” error has been fixed.
And I agree with you that a consistent tag would be the best approach. But the current method has been in use for a long time and many things now depend on it, so changing it would cause us more problems. We will consider making the change in the future; for now, we just have to work with it.

1 Like

Just to clarify, though: you can always push an image to Docker Hub with additional tags. This would be a non-breaking change; the old tag method would continue to work, but anyone who wanted to track a stable tag could do so. This is an idiomatic way to use tags.

Before pushing the image, you would tag it locally with as many tags as you want and then push with the --all-tags flag.
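For example (a sketch; <existing-tag> stands for whatever dated tag the team already publishes, and 'current' is the proposed stable tag):

    # Give the already-built image a second, stable tag locally...
    docker tag incognitochain/incognito-mainnet:<existing-tag> incognitochain/incognito-mainnet:current
    # ...then push every local tag for the repository in one go.
    docker push --all-tags incognitochain/incognito-mainnet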

PS. I was an engineer at Docker for a bit. Happy to help out with any of this stuff if community input would be helpful.

4 Likes

This is great. Should we apply this, @duc?

1 Like