Also getting this issue on some of the nodes on the same server after running the patch, latest script etc.
How to set up your own node in the blink of an eye (now supports multiple nodes per host)
Please use the latest blink script and then the patch will no longer be needed.
From your log above, it's most likely that the validator key's format is wrong. Please make sure that your keys are all correct; check /home/incognito/validator_keys.
After correcting the file, remove the docker container, then start the updater again.
docker container rm inc_mainnet_0
sudo systemctl start IncognitoUpdater.service
If the problem still happens, please send the output of the following command, your /home/incognito/validator_keys, and log file to @Support (don't post them here). I will take a look.
journalctl --since "2 weeks ago" | grep IncN
Hi, sorry been out for the weekend.
That file contains only:
validator_key_1,validator_key_2,validator_key_3
OK, absolutely my bad! I let it go with default values and thought it would somehow create them… It's up now. Deeply sorry.
Good morning all. I am using this script, latest version, to run multiple nodes. Since yesterday the monitor tells me I am not running the latest tag anymore; currently I am on:
However, the nodes won't update themselves through the IncUpdater service for some reason. When I run it manually, the script returns:
Getting Incognito docker tags
Current tag |20220921_1| - Latest tag |20220921_1|
Not sure why I am told there is a newer code version available?
Nevertheless, I tried to manually update only one (!) of my nodes as a test, since some of my nodes are currently earning and I don't want to stop those. So I stopped and removed one container but can't get it started again. When I run run_node.sh it tells me the above, so no newer tag found, and the script stops. Is it only possible to restart all nodes? If so, that's far from ideal imho for people running more than one node on a machine. How would I manually restart just that one docker image?
Looking forward to a possible solution,
Thanks scooter
There's a problem with the node monitor; you don't have to worry about it.
Your node is up to date already.
Ok cool - thanks! However, how do I restart the one docker image I killed? Would the only way be to stop/rm all of my containers and then restart them all, or is there a way to restart only one? As it happens, this node is the last entry in my validator_keys file, and I expected the script to check whether all nodes are running and restart the ones that are not. Any other simple way to get the one to restart?
If you manually stopped and removed the container then you'll need to manually rebuild it.
Reference the code in run_node.sh and run with the variables for the node that was stopped and removed.
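To make "run with the variables for the node" concrete, here is a minimal dry-run sketch of rebuilding one container by hand. The image name, port scheme, environment variable, and data path below are assumptions for illustration only; copy the real values from your own run_node.sh before executing anything.

```shell
#!/bin/sh
# Hypothetical sketch: rebuild a single node container, mirroring what
# run_node.sh does for one validator. All names/ports/paths here are
# placeholders -- take the real ones from your run_node.sh.
NODE_INDEX=0                          # index of the node you stopped/removed
VALIDATOR_KEY="your_validator_key"    # the matching entry from validator_keys
RPC_PORT=$((8334 + NODE_INDEX))      # example port scheme, adjust to yours
NODE_PORT=$((9433 + NODE_INDEX))
DATA_DIR="/home/incognito/node_data_${NODE_INDEX}"

# Dry run: print the command instead of executing it, so you can compare
# it against the real invocation in run_node.sh first.
echo docker run -d --name "inc_mainnet_${NODE_INDEX}" \
    -e MININGKEY="${VALIDATOR_KEY}" \
    -p "${RPC_PORT}:${RPC_PORT}" -p "${NODE_PORT}:${NODE_PORT}" \
    -v "${DATA_DIR}:/data" incognitochain/incognito-mainnet
```

Remove the leading `echo` only once the printed command matches what the script itself would run for that node index.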
Or simply stop and remove all containers and run the script again… all working again now!
Hey @Rocky,
I LOVE this script - THANK YOU. I recently had to rebuild my vNodes due to some weird thing causing my nodes to restart every 18-24 hours. Anyways, by using your script it was super easy and I'm back up and running with the latest version update.
Question: Is there a way in your script to configure the "Reward Account" addresses for each node during the vNode deployment process?
The reason I ask is that, by default, rewards received from being in the committee are sent to individual Funding Account wallets. For those of us running multiple nodes, it would be great if we could specify a different wallet address for each node; that way we could consolidate the earnings into one or two wallets.
If this feature is not available in your script would you consider (if possible) adding something like this similar to the way we specify our multiple validators:
E.g. validator_key_1,validator_key_2,validator_key_3
by having a similar setting for reward payment addresses for each validator:
E.g. rewardAddress_1,rewardAddress_2,rewardAddress_3
Your thoughts on this?
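If such a pairing were ever implemented, the two comma-separated lists would just need to be zipped together by position. This is a purely hypothetical sketch of that parsing; the REWARD_ADDRESSES list and all variable names here are assumptions, not part of the actual script:

```shell
#!/bin/sh
# Hypothetical sketch: pair each validator key with a reward address by
# position in two comma-separated lists (not real script behavior).
VALIDATOR_KEYS="validator_key_1,validator_key_2,validator_key_3"
REWARD_ADDRESSES="rewardAddress_1,rewardAddress_2,rewardAddress_3"

i=1
PAIRS=""
while :; do
  key=$(echo "$VALIDATOR_KEYS" | cut -d, -f"$i")
  addr=$(echo "$REWARD_ADDRESSES" | cut -d, -f"$i")
  [ -n "$key" ] || break               # stop past the last field
  PAIRS="${PAIRS}${key}=>${addr}
"
  i=$((i + 1))
done
printf '%s' "$PAIRS"                   # one "key=>address" line per node
```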
This script only creates the docker containers and has nothing to do with staking. The staking TX takes place when you click the blue Stake button in the app, under the power screen.
The best way to do this would be to add text fields to the app that allow the user to input an address for the rewards prior to the stake button being clicked.
Hey @Jared, thanks for the excellent explanation on where would be the best place to implement such a customisation.
Your idea above makes very good sense. I support this idea.
All - wondering if the latest update covers the needed updates for running a fullnode? Looks like the blink script doesn't include an option to run a fullnode? I've used the previous script but that appears to be crashing after the most recent update.
For this, you will have to edit the blink.sh and change the "FULLNODE" config in the script before running it.
Or if you already ran the blink script, you can just update the "fullnode=1" config in the file /home/incognito/run_node.sh. Then remove the docker container and start the updater service again (sudo systemctl start IncognitoUpdater.service) for the change to take effect.
Don't worry about removing the container; the node's data won't be removed. Once the updater sets up a new container, the old data will be reused, and your full node will resume operation from where it was before.
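The switch above can be rehearsed safely before touching the real file. This sketch applies the same edit to a temporary stand-in for /home/incognito/run_node.sh; the exact `fullnode=0` line format is an assumption about how the config looks in your copy of the script:

```shell
#!/bin/sh
# Rehearse the fullnode switch on a throwaway copy first.
RUN_NODE=$(mktemp)
echo 'fullnode=0' > "$RUN_NODE"    # stand-in for /home/incognito/run_node.sh

# Flip the config flag in place (pattern assumed; check your real file).
sed -i 's/^fullnode=0$/fullnode=1/' "$RUN_NODE"
RESULT=$(grep '^fullnode=' "$RUN_NODE")
echo "$RESULT"

# Then, on the real host:
#   docker container rm inc_mainnet_0
#   sudo systemctl start IncognitoUpdater.service
rm -f "$RUN_NODE"
```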
Is there a flag for uninstalling?
3. To uninstall, run:
# sudo ./{this script} -u
the fix doesn't execute on my end and returns:
sudo bash -c “rm -f blink.sh && curl -LO https://raw.githubusercontent.com/incognitochain/incognito-chain/production/bin/blink.sh && chmod +x blink.sh && ./blink.sh -y2”
-f: “rm: command not found
Sorry, the double quote in the post is different from the one on the terminal.
Please just remove both of the double quotes and type them again on your keyboard. I'm sure it will work.
I also edited the post, you can refresh this post and copy the command again.
I have a feature request.
Is it possible to have the updater check whether a container is actively validating and delay the update?
I understand that if you had enough validators running on one machine and they were spaced out just so, this might prevent updates entirely, so I'd suggest updating all containers except the active ones, and updating the delayed ones once they are no longer validating. That is how the hard-link script works.
I ask for this feature because I have a lot of containers running on my machine, and it takes more than an hour for them all to checksum the beacon and shard data and become available. I have had updates hit in the middle of a node validating, and I have even been slashed a couple of times because of it.
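The requested behavior could be sketched as a simple defer-and-retry loop around the updater. Everything below is hypothetical: `is_validating` is a stub (a real version would have to query each node's committee state), and none of the names are the updater's actual code.

```shell
#!/bin/sh
# Hypothetical sketch: update every container except those currently in
# committee, and collect the skipped ones for a later retry.
CONTAINERS="inc_mainnet_0 inc_mainnet_1 inc_mainnet_2"

is_validating() {
  # Stub: pretend only inc_mainnet_1 is in committee right now. A real
  # check would ask the node itself about its committee/mining state.
  [ "$1" = "inc_mainnet_1" ]
}

update_container() {
  echo "updating $1"   # stand-in for the real stop/rm/pull/run steps
}

deferred=""
for c in $CONTAINERS; do
  if is_validating "$c"; then
    echo "deferring $c (in committee)"
    deferred="$deferred $c"
  else
    update_container "$c"
  fi
done
# A real updater would poll and update $deferred once each node leaves
# committee; here we only report what was skipped.
echo "deferred:$deferred"
```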