Important: Update your Node to extend its life cycle.

Also someone in incognito channel wrote:

Looks like the new code update gets your account banned on google cloud
Resources associated with your project My First Project (id: ) are being suspended for violating our Terms of Service by mining cryptocurrency.

Hi @huglester,

The 200 GB or 50 GB figures (I think it is now ~18 GB for a fullnode on db v2, and less if you run a validator node) refer to the data stored after the node downloads blocks and extracts data from them (the block data plus the info fetched from each block). Db v1 syncs blocks the same way as db v2, but db v2 fetches and stores the data in a new way that saves a lot of disk space. In fact, db v1 syncs a little faster than db v2, because it does not do much processing or space reduction on the fetched data — it writes everything as-is — so db v1 is very bad in terms of storage and takes up a lot of space.

We also plan to profile and optimize the write path of db v2. It may take 3-4 weeks to find a better approach.


Hey @Peter, 8 hours have passed since I wrote this ("Important: Update your Node to extend its life cycle."). However, the height of both nodes increased by only about 5,000. A rough calculation: syncing both nodes will take 230,000 / 15,000 = 15.3 days. Even if I shut down one node, syncing the other will take 7.65 days.
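The rough ETA above can be reproduced with a one-liner. The 230,000-block backlog and ~5,000 blocks per 8 hours are the figures observed in this post; nothing here is an official number:

```shell
# Rough sync ETA: remaining blocks divided by observed sync rate.
# ~5000 blocks per 8 hours -> ~15000 blocks per day (figures from the post above).
awk 'BEGIN { backlog = 230000; per_day = 5000 * 3; printf "%.1f days\n", backlog / per_day }'
```

Halving the backlog (running one node instead of two, assuming the rate doubles) halves the estimate accordingly.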

@huglester (Jaroslav) is in the very same situation.

I even tried to sync on a dedicated server with a 1 Gbit port and NVMe disks. Still the same speed.

But as I see it, we just wait. That is the only solution for now.


Yes we noticed. Google Cloud users will have to find another host, unless Google changes its mind which they hardly ever do.

I’m really confused :slight_smile: Since syncing is slow, I check the page frequently. I noticed that my node received some rewards just now; however, its shard id was -2. The reward shows 0 in the figure because I had withdrawn it before taking the screenshot.
[screenshot: Region capture 14]

Then I checked and I saw that my node was in the committee of shard 6.

How is this possible? Is this normal? @Peter

Could it be a bug? @mesquka

@abduraman hey. Was your /data truncated after this last update?

@abduraman The nodes page shows the data returned by the node directly. I’m considering changing it to display ‘syncing’ if the block height it reports is more than 1000 blocks below the current block height. Not sure at what point the node can start earning, need to investigate that a bit more.
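The threshold check described above could be sketched as follows. The 1,000-block cutoff comes from the post; the heights here are made-up illustration values:

```shell
# Label a node "syncing" when its reported height lags more than 1000 blocks
# behind the current chain height (cutoff from the post above; heights are examples).
node_height=228000
chain_height=230000
if [ $((chain_height - node_height)) -gt 1000 ]; then
  echo "syncing"
else
  echo "synced"
fi
```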


If you mean removing old data, then yes, I had deleted all of the data.

Yes. OK, my data was truncated as well :slight_smile:

Something has changed now. CPU load has dropped a lot.
So something was changed, just not sure what :slight_smile:

Strange, our databases got truncated as well.
I think it was done to “fix” nodes that auto-update but did not truncate the old database.
Maybe there should be something like a “subfolder”,

so the system could detect whether it is on the “old” or “new” db format.

Just my 2c.


@Peter - what is the current expected database size?
I just want to know when it is “synced” :slight_smile:

Hi @huglester. So far I have 3 results: one is 4.7 GB, one is 7.3 GB, and one is 14 GB. If you are swapped in and out many times, there is a chance your Node synced the data from all the shards, and it will reach the fullnode size: ~18 GB.

Hi @abduraman, it is normal: if there are enough signatures under the Consensus rule, the block will be generated, and it covers your Node even while your Node is still syncing.


Yes. We pushed a new docker tag this morning, around 9:00 am VN time. We optimized some of the read path: some RPCs related to tokens were slow. We fixed that yesterday and deployed it this morning.

About truncating the old db when updating to db v2: we could force it, but some people may not want that — someone may want to keep the old version, or try the migration manually. So the best approach is to give them a choice: follow this topic and update their node manually.


I’m trying to run an update, but it seems I’m stuck. What do I do here?

Status: Downloaded newer image for incognitochain/logshipper:1.0.0

docker: Error response from daemon: Conflict. The container name “/inc_logshipper” is already in use by container “dfe7a44b6c18b479b1af8ab39d3a964ffcab9843c0d228accf52da1c802b4aef”. You have to remove (or rename) that container to be able to reuse that name.
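One way out of this error, assuming the old `inc_logshipper` container is safe to discard, is to remove it by name and rerun the update script (a sketch, not an official fix from the team):

```shell
# The error means the name /inc_logshipper is held by a stale container.
# Remove it (force-stops it if running), then rerun the update command.
docker rm -f inc_logshipper

# Alternatively, keep the old container under a different name instead of deleting it:
# docker rename inc_logshipper inc_logshipper_old
```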

@huglester What is your situation after update? Is there any difference?

As of now, this is no longer a problem for me :slight_smile: since my nodes join the committee and earn well from time to time, even though their heights are far below the current height.

Well, the node seems up and running, but I’m not sure whether the update went through or not. Is there a way to check the size?

It’s fine to ignore this message. You can check that docker is running the latest tag with this command: docker ps
See also: How to check your Virtual Node

The size varies because it depends on which shard your Node is assigned to.


Hey @Peter. When will the pnode option to update firmware be available?
