On Windows 10, in a command prompt, I run
> geth --rinkeby
which starts syncing my node with the network. In another command prompt, I run
> geth --rinkeby attach ipc:\\.\pipe\geth.ipc
And then
> eth.syncing
which gives
{
currentBlock: 3500871,
highestBlock: 3500955,
knownStates: 25708160,
pulledStates: 25680474,
startingBlock: 3500738
}
As you can see, I am always about 80 blocks behind the highest block. I've heard this is normal for the testnet. I created an account on Rinkeby and requested ether via the faucet: https://faucet.rinkeby.io/. I also tried https://faucet.ropsten.be/ but couldn't get any ether.
On the geth console, I can show my account which gives
> eth.accounts
["0x7bf0a466e7087c4d40211c0fa8aaf3011176b6c6"]
and checking the balance:
eth.getBalance(eth.accounts[0])
The balance comes back as 0. I don't know if this is because my node is about 80 blocks behind the highest block...?
Edit: It may be worth adding that I created a symbolic link from AppData/Roaming/Ethereum on my C drive to another folder on my D drive, as I was running out of space. (I don't know if that affects my sync.)
I guess you are facing the problem known as "not syncing the last 65 blocks":
Q: I'm stuck at 64 blocks behind mainnet?!
A: As explained above, you are not stuck, just finished with the block
download phase, waiting for the state download phase to complete too.
This latter phase nowadays takes a lot longer than just getting the
blocks.
For more information, see https://github.com/ethereum/mist/issues/3760#issuecomment-390892894
Stop geth and start it again. It's pretty normal to be behind the highest block. For the ether, check on Etherscan whether you actually received it from the faucet; that way you will know at which block height you received it, and you can wait until your node has synced past that block. Another option is a hosted service such as Quicknode, so you don't have to keep your machine running or wait for hours before continuing with development work. They charge a small nominal fee, but for the service they provide it's well worth it.
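For interpreting the balance once it does show up: eth.getBalance returns wei, and 1 ether = 10^18 wei. A minimal conversion sketch in Python (an illustrative stand-in for web3's fromWei, not the web3 API itself):

```python
from decimal import Decimal

WEI_PER_ETHER = 10 ** 18  # 1 ether = 10^18 wei

def wei_to_ether(wei: int) -> Decimal:
    """Convert a wei amount (as returned by eth.getBalance) to ether."""
    return Decimal(wei) / WEI_PER_ETHER

# A faucet payout of 7.5 ether arrives as 7_500_000_000_000_000_000 wei
assert wei_to_ether(7_500_000_000_000_000_000) == Decimal("7.5")
```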
I'm having problems with the configuration of an Elastic Beanstalk environment. Almost immediately, within maybe 20 seconds of launching a new instance, it starts showing warning status and reporting that the health checks are failing with 500 errors.
I don't want it to even attempt to do a health check on the instance until it's been running for at least a couple of minutes. It's a Spring Boot application and needs more time to start.
I have an .ebextensions/autoscaling.config declared like so...
Resources:
  AWSEBAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 200
      DefaultInstanceWarmup: 200
      NewInstancesProtectedFromScaleIn: false
      TerminationPolicies:
        - OldestInstance
I thought the HealthCheckGracePeriod should do what I need, but it doesn't seem to help. EB immediately starts trying to get a healthy response from the instance.
Is there something else I need to do, to get EB to back off and leave the instance alone for a while until it's ready?
The HealthCheckGracePeriod is the correct approach. The service will not be considered unhealthy during the grace period. However, this does not stop the ELB from sending the healthchecks according to the defined (or default) health check interval. So you will still see failing healthchecks, but they won't make the service be considered "unhealthy".
There is no setting to prevent the healthcheck requests from being sent at all during an initial period, but there should be no harm in the checks failing during the grace period.
You can make the HealthCheckIntervalSeconds longer, but that will apply for all healthcheck intervals, not just during startup.
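If the failing checks during startup are noisy, you can stretch the interval and thresholds instead. A sketch using the Elastic Beanstalk process option namespace (the values are assumptions to tune, `default` is the standard process name, and the health check path assumes a Spring Boot actuator endpoint):

```yaml
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    HealthCheckPath: /actuator/health   # assumption: Spring Boot actuator
    HealthCheckInterval: 60             # seconds between ELB health checks
    HealthyThresholdCount: 2
    UnhealthyThresholdCount: 5
```

As noted above, though, this applies to every health check interval, not just during startup.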
I'm having some trouble with our mail server since yesterday.
First, the server was down for a couple of days: thanks to KVM, the VMs were paused because storage was apparently full. I managed to fix that issue, but since the mail server came back online, CPU usage has been constantly at 100%. I checked the logs, and there were "millions" of mails waiting in the postfix queue.
I tried to flush the queue with the PFDel script; it took some time, but all the mails were gone and we were finally able to receive new emails. I also forced a logrotate, because fail2ban was also using a lot of CPU.
Unfortunately, after a couple of hours, the postfix active queue is still growing, and I really don't understand why.
Another script I found gives me this result right now:
Incoming: 1649
Active: 10760
Deferred: 0
Bounced: 2
Hold: 0
Corrupt: 0
Is there a way to deactivate "Undelivered Mail returned to Sender"?
Any help would be much appreciated.
Many thanks
You could first temporarily stop sending bounce mails completely, or set stricter rules, in order to analyze the reasons for the flood. See for example: http://domainhostseotool.com/how-to-configure-postfix-to-stop-sending-undelivered-mail-returned-to-sender-emails-thoroughly.html
Sometimes spammers find some weakness (or even a vulnerability) in your configuration or SMTP server and use it to send spam (even if it can only reach the addressee via a bounce). In that case you will usually find your IP/domain in some common blacklist services (or it will be blacklisted by the large mail providers very quickly), which adds to the flood: the bounces get rejected by the recipient servers, which makes your queue grow even more.
So also check your IP/domain with https://mxtoolbox.com/blacklists.aspx or a similar service (sometimes they also give the reason why it was blocked).
As for fail2ban, you can also analyze the logs (find some pattern) to detect the evildoers (the initial senders) and write a custom regex for fail2ban to ban them, for example after 10 attempts in 20 minutes (or add them to an ignore list for bounce messages in postfix). You'd still send the first X bounces, but after that the repeat-offender IPs would be banned, which could also reduce the flood significantly.
And last but not least, check your config (follow best practices for it) and set up at least MX/SPF records, DKIM signing/verification and a DMARC policy.
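For the "Undelivered Mail returned to Sender" question specifically, bounce generation can be throttled in main.cf. This is only a sketch; the values are assumptions to tune for your site, and aggressively expiring mail can drop legitimate messages:

```ini
# /etc/postfix/main.cf
# Give up on undeliverable mail after 1 hour instead of the default 5 days
maximal_queue_lifetime = 1h
# Expire bounce messages that themselves cannot be delivered immediately
bounce_queue_lifetime = 0
# Only notify postmaster about resource/software problems, not every bounce
notify_classes = resource, software
```

Run `postfix reload` after editing for the changes to take effect.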
According to Etherscan I have 7.5 ether, but when I execute eth.getBalance(eth.accounts[0]) in the JavaScript console it always returns 0.
This is how I am connecting geth to Rinkeby (it has been running for more than 24 hours):
geth --rinkeby
This is the status of the sync:
λ geth --rinkeby attach ipc:\\.\pipe\geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.9.9-stable-01744997/windows-amd64/go1.13.4
coinbase: 0x7f9153e8fe06c4b3eb10e8457c20d0559921de5c
at block: 0 (Wed, 12 Apr 2017 16:59:06 CEST)
datadir: C:\Users\andre_000\AppData\Local\Ethereum\rinkeby
modules: admin:1.0 clique:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
> eth.syncing
{
currentBlock: 5746334,
highestBlock: 5746402,
knownStates: 32641057,
pulledStates: 32636964,
startingBlock: 5746304
}
> eth.getBalance(eth.accounts[0])
0
> eth.getBalance(eth.coinbase)
0
> web3.fromWei(eth.getBalance(eth.coinbase));
0
> eth.getBalance("0x7f9153e8fe06c4b3eb10e8457c20d0559921de5c")
0
> eth.blockNumber
0
du -h
30G
It sounds like geth is not yet synced up.
Please type this into your geth console:
eth.getBlock("latest").number
As of this post, you should get 5757199 or higher.
"Syncing Ethereum is a pain point," as Péter Szilágyi (the go-ethereum team lead) says.
From https://github.com/ethereum/go-ethereum/issues/16875:
The current default mode of sync for Geth is called fast sync. Instead
of starting from the genesis block and reprocessing all the
transactions that ever occurred (which could take weeks), fast sync
downloads the blocks, and only verifies the associated proof-of-works.
Downloading all the blocks is a straightforward and fast procedure and
will relatively quickly reassemble the entire chain.
Many people falsely assume that because they have the blocks, they are
in sync. Unfortunately this is not the case, since no transaction was
executed, so we do not have any account state available (ie. balances,
nonces, smart contract code and data). These need to be downloaded
separately and cross checked with the latest blocks. This phase is
called the state trie download and it actually runs concurrently with
the block downloads; alas it takes a lot longer nowadays than
downloading the blocks.
So, what's the state trie? In the Ethereum mainnet, there are a ton of
accounts already, which track the balance, nonce, etc of each
user/contract. The accounts themselves are however insufficient to run
a node; they need to be cryptographically linked to each block so that
nodes can actually verify that the accounts are not tampered with.
This cryptographic linking is done by creating a tree data structure
above the accounts, each level aggregating the layer below it into an
ever smaller layer, until you reach the single root. This gigantic
data structure containing all the accounts and the intermediate
cryptographic proofs is called the state trie.
Ok, so why does this pose a problem? This trie data structure is an
intricate interlink of hundreds of millions of tiny cryptographic
proofs (trie nodes). To truly have a synchronized node, you need to
download all the account data, as well as all the tiny cryptographic
proofs to verify that no one in the network is trying to cheat you.
This itself is already a crazy number of data items. The part where it
gets even messier is that this data is constantly morphing: at every
block (15s), about 1000 nodes are deleted from this trie and about
2000 new ones are added. This means your node needs to synchronize a
dataset that is changing 200 times per second. The worst part is that
while you are synchronizing, the network is moving forward, and state
that you began to download might disappear while you're downloading,
so your node needs to constantly follow the network while trying to
gather all the recent data. But until you actually do gather all the
data, your local node is not usable since it cannot cryptographically
prove anything about any accounts.
If you see that you are 64 blocks behind mainnet, you aren't yet
synchronized, not even close. You are just done with the block
download phase and still running the state downloads. You can see this
yourself via the seemingly endless Imported state entries [...] stream
of logs. You'll need to wait that out too before your node comes truly
online.
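The "tree data structure above the accounts" can be illustrated with a toy example. Below is a simplified binary Merkle tree over account snapshots, not Ethereum's actual hexary Merkle-Patricia trie with keccak-256 hashing; it only shows how each level aggregates the one below it into a single root, and why changing any account changes that root:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Aggregate leaf hashes level by level until a single root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Toy "accounts": address -> balance snapshots serialized as bytes
accounts = [b"alice:100", b"bob:42", b"carol:7"]
root = merkle_root(accounts)

# Tampering with any single account produces a different root
assert merkle_root([b"alice:999", b"bob:42", b"carol:7"]) != root
```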
The zone does not have enough resources available to fulfill the request / the resource is not ready
I failed to start my instance (through the web browser), it gave me the error:
"The zone 'projects/XXXXX/zones/europe-west2-c' does not have enough resources available to fulfill the request. Try a different zone, or try again later."
I thought it might be a quota problem at first, but after checking my quota everything looked fine. I listed the available zones and europe-west2-c was available, but I still gave moving the zone a shot. I tried "gcloud compute instances move XXXX --zone europe-west2-c --destination-zone europe-west2-c", however it still failed and popped up the error:
"ERROR: (gcloud.compute.instances.move) Instance cannot be moved while in state: TERMINATED"
Okay, terminated... then I tried to restart it with "gcloud compute instances reset XXX", which showed this error:
ERROR: (gcloud.compute.instances.reset) Could not fetch resource: - The resource 'projects/XXXXX/zones/europe-west2-c/instances/XXX' is not ready
I searched for the error; some people solved this problem by deleting the disk. Since I don't want to wipe my data, how can I solve this problem?
BTW, I only have one instance, with one persistent disk attached.
It's recommended to deploy and balance your workload across multiple zones or regions [1] to reduce the likelihood of an outage by building resilient and scalable architectures.
If you want an immediate solution, create a snapshot [2], then create an instance from the snapshot in a different zone or region [3].
If you are still experiencing the same issue after migrating, I suggest contacting GCP support [4].
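The snapshot route above can be sketched with gcloud. The resource names and target zone here are hypothetical placeholders; adapt them to your project:

```shell
# 1. Snapshot the persistent disk of the stopped instance
gcloud compute disks snapshot my-disk --zone=europe-west2-c \
    --snapshot-names=my-snapshot

# 2. Create a new disk from the snapshot in a less constrained zone
gcloud compute disks create my-new-disk --zone=europe-west2-b \
    --source-snapshot=my-snapshot

# 3. Boot a new instance from that disk
gcloud compute instances create my-new-vm --zone=europe-west2-b \
    --disk=name=my-new-disk,boot=yes
```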
> w3.eth.syncing
AttributeDict({
'currentBlock': 5787386,
'highestBlock': 5787491,
'knownStates': 138355583,
'pulledStates': 138341120,
'startingBlock': 5787335,
})
> w3.eth.blockNumber
0
I have done a full sync, but the block number is always 0.
Peter, the lead geth developer, posted a thorough response to this question here: Block number is always zero with fast syncmode. To add slightly more color to carver's answer, while you may have received the most recent block headers (eth.currentBlock), your node probably still has a lot of work remaining to download the entire state tree. To quote Peter:
Many people falsely assume that because they have the blocks, they are
in sync. Unfortunately this is not the case, since no transaction was
executed, so we do not have any account state available (ie. balances,
nonces, smart contract code and data). These need to be downloaded
separately and cross checked with the latest blocks. This phase is
called the state trie download and it actually runs concurrently with
the block downloads; alas it takes a lot longer nowadays than
downloading the blocks
This is the same situation as Why is my ether balance 0 in geth, even though the sync is nearly complete? but with a slightly different "symptom".
To quote the important bit:
geth --fast has an interesting effect: geth cannot provide any information about accounts or contracts until the sync is fully complete.
Try querying the balance again after eth.syncing returns false.
Note that in addition to accounts and contracts, you also cannot retrieve any information about blocks until the sync is complete.
When your sync is fully complete, syncing will return false, like:
> w3.eth.syncing
False
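"Try querying the balance again after eth.syncing returns false" can be automated. A minimal polling sketch in Python; `wait_until` is an illustrative helper, not part of the web3 API, and the commented usage assumes a connected web3.py instance `w3`:

```python
import time

def wait_until(predicate, poll_interval=1.0, timeout=3600.0):
    """Call predicate() repeatedly until it returns True; raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within timeout")

# Hypothetical usage with web3.py (w3 is a connected Web3 instance):
#   wait_until(lambda: w3.eth.syncing is False, poll_interval=30)
#   balance = w3.eth.getBalance(w3.eth.accounts[0])
```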