I have an issue with Geth. I want to compile and deploy a smart contract, and in order to do that I have added ether to my wallet.
We can verify the transaction here: https://etherscan.io/tx/0x0b3247c2c1d1f26411486ad3ee4fd47e2a8b71de56af0b0fe6759586dd18f6b5
where we can see that the transaction is in block 4,679,731.
Here we can verify the amount of ether in the wallet: https://etherscan.io/address/0x704e2b488674afa69069a165d3c8a80a27c30d6f which is 0.075 ether.
Then I built a virtual machine with a vanilla Ubuntu, installed Geth (v1.7.0-stable-6c6c7b2a), and tried to sync from 10-29-2017 to 12-20-2017.
I launch Geth with: geth console
In my research, I found two commands that can show the progress of the sync:
var sync = web3.eth.syncing;sync;
gives:
{
currentBlock: 1144835,
highestBlock: 1720228,
knownStates: 33418,
pulledStates: 25707,
startingBlock: 1144067
}
eth.getBlock("latest").number
gives:
0
I used the following command to check how much ether an account has:
web3.fromWei(eth.getBalance("0x704e2b488674aFa69069A165D3C8a80A27C30D6f"), "ether")
gives:
0
So after two months I'm not synced, and my Geth doesn't know how much ether I have in my account.
I have apparently synced only up to block 1,144,835, and the transaction is in block 4,679,731, so I suppose that result is expected, but two months to get this far?
Do I have to wait 8 months before I can start using Geth? That's not acceptable.
So I built a second virtual machine on 12-29-2017 with the same vanilla Ubuntu, installed Geth (1.7.3-stable-4bb3c89d), and tried to sync.
The difference from the first one: I launch with the fast sync mode: geth --fast --cache=1024 console.
Now, the commands
var sync = web3.eth.syncing;sync;
gives:
{
currentBlock: 3872910,
highestBlock: 4884396,
knownStates: 3180,
pulledStates: 2379,
startingBlock: 3865878
}
eth.getBlock("latest").number
gives:
0
and
web3.fromWei(eth.getBalance("0x704e2b488674aFa69069A165D3C8a80A27C30D6f"), "ether")
gives:
0
So in 12 days I have synced 4x more, but for the last 4 days I keep getting this message:
WARN [01-10|01:01:40] Rolled back headers count=2048 header=3894230->3892182 fast=3872910->3872910 block=0->0
every time, and it doesn't sync any further.
So I tried a third time on my personal computer, running Windows 8.1, installed Geth (1.7.3-stable-4bb3c89d), and tried to sync starting on 01-07-2018
with the command line: geth.exe --syncmode "full" console
Now, the commands
var sync = web3.eth.syncing;sync;
gives:
{
currentBlock: 4881711,
highestBlock: 4884507,
knownStates: 34771,
pulledStates: 26470,
startingBlock: 4880214
}
eth.getBlock("latest").number
gives:
0
and
web3.fromWei(eth.getBalance("0x704e2b488674aFa69069A165D3C8a80A27C30D6f"), "ether")
gives:
0
So I have reached the targeted block (4,679,731) in one day, but Geth still reports that I have 0 ether in my account?
My question is: how can I sync so that I can start working with smart contracts?
When I search on the web, the only answer I can find is to wait.
I don't want to read that answer anymore; otherwise, tell me a real method that shows me the actual progress.
Syncing can be very frustrating. However, you don't need to constantly rebuild VMs to try to get a fully sync'ed node. Usually, you can just stop and restart geth and it'll find new peers and resume downloading the blockchain from where it left off. Starting from a blank slate, it's not uncommon for it to take close to a day to fully sync mainnet.
You do have alternatives, though. If you're just looking to develop smart contracts, you can use Truffle/Ganache (formerly TestRPC) to run a local private dev blockchain. Remix is a development tool which can connect to TestRPC, or you can execute smart contracts within its own internal VM. If you want to connect to mainnet or testnet, you can use the MetaMask plugin, which doesn't require you to have a sync'ed local node (Remix can connect through MetaMask as well). Finally, you can try using geth --light to enable light syncing mode.
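For example, with the Geth 1.7.x build you already have (a minimal sketch; --light is the flag that enables light client mode), you could start a light node with:
geth --light console
and then check progress and your balance from the console with the same calls as before:
eth.syncing
web3.fromWei(eth.getBalance("0x704e2b488674aFa69069A165D3C8a80A27C30D6f"), "ether")
A light node only downloads block headers and fetches the state it needs on demand, so it should become usable much sooner than a full or fast sync.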
I'm trying to get started with Ethereum, so I first created my wallet using MetaMask, and I added it to Geth using the private key with this command:
geth account import private_key.txt
After that I run the mining command using geth --mine, so my question is: am I really mining to my correct wallet?
OPTION 1:
You can set the account your Ethereum miner mines to by running the following in the geth console:
miner.setEtherbase('yourethaddress')
You can also set a local address to mine to using:
miner.setEtherbase(eth.accounts[2])
Replace 2 with the index of your account in eth.accounts.
OPTION 2: When starting your geth node you can use the --etherbase flag.
geth --rpc --etherbase 0xC95767AC46EA2A9162F0734651d6cF17e5BfcF10
Use your ETH public address here.
More information here: https://geth.ethereum.org/docs/interface/mining
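Whichever option you use, you can sanity-check which address the rewards will be credited to by reading the current etherbase back in the geth console (a standard console property, shown here only as a quick verification):
eth.coinbase
It should print the address you set; if it still shows another account, the setting didn't take effect.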
I am trying to run a beacon chain for Ethereum 2.0 on the Pyrmont testnet with Prysm and Besu.
I run the ETH1 node with the command:
besu --network=goerli --data-path=/root/goerliData --rpc-http-enabled
This command works, downloads the entire blockchain, and then runs properly.
But when I launch:
./prysm.sh beacon-chain --http-web3provider=localhost:8545 --pyrmont
I get:
Verified /root/prysm/dist/beacon-chain-v1.0.0-beta.3-linux-amd64 has been signed by Prysmatic Labs.
Starting Prysm beacon-chain --http-web3provider=localhost:8545 --pyrmont
[2020-11-18 14:03:06] WARN flags: Running on Pyrmont Testnet
[2020-11-18 14:03:06] INFO flags: Using "max_cover" strategy on attestation aggregation
[2020-11-18 14:03:06] INFO node: Checking DB database-path=/root/.eth2/beaconchaindata
[2020-11-18 14:03:08] ERROR main: database contract is xxxxxxxxxxxx3fdc but tried to run with xxxxxxxxxxxx6a8c
I tried deleting the previous data folder /root/goerliData and re-downloading the blockchain, but nothing changed...
Why didn't the database contract change, and what should I do?
Thanks :)
The error means that you have an existing database for another network, probably Medalla.
Try starting your beacon node with the --clear-db flag next time, and you'll see the error disappear and the node start syncing Pyrmont.
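For example, based on the command you are already running (unchanged apart from the added --clear-db flag, which wipes the old beacon database before syncing):
./prysm.sh beacon-chain --http-web3provider=localhost:8545 --pyrmont --clear-db
Note that this removes the existing beacon chain data under /root/.eth2, so the node will re-sync Pyrmont from scratch.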
My first problem looked like this:
Writing objects: 60% (9/15)
It froze there for some time with a very low upload speed (in kb/s), then, after a long time, gave this message:
fatal: the remote end hung up unexpectedly
Everything up-to-date
I found something that seemed to be a solution:
git config http.postBuffer 524288000
This created a new problem that looks like this:
MacBook-Pro-Liana:LC | myWebsite Liana$ git config http.postBuffer 524288000
MacBook-Pro-Liana:LC | myWebsite Liana$ git push -u origin master
Enumerating objects: 15, done.
Counting objects: 100% (15/15), done.
Delta compression using up to 4 threads
Compressing objects: 100% (14/14), done.
Writing objects: 100% (15/15), 116.01 MiB | 25.16 MiB/s, done.
Total 15 (delta 2), reused 0 (delta 0)
error: RPC failed; curl 56 LibreSSL SSL_read: SSL_ERROR_SYSCALL, errno 54
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
Please help, I have no idea what’s going on...
First, Git 2.25.1 made it clear that:
Users in a wide variety of situations find themselves with HTTP push problems.
Oftentimes these issues are due to antivirus software, filtering proxies, or other man-in-the-middle situations; other times, they are due to simple unreliability of the network.
However, a common solution found online is to increase http.postBuffer. This works for none of the aforementioned situations and is only useful in a small, highly restricted number of cases: essentially, when the connection does not properly support HTTP/1.1.
Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes.
Second, it depends on your actual remote (GitHub? GitLab? Bitbucket? An on-premises server?). That remote server might have an incident in progress.
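If you did set http.postBuffer as in the question, a reasonable first step is simply to revert it and retry the push (a minimal sketch; --unset removes the setting from the local repository config):
git config --unset http.postBuffer
git push -u origin master
If the push still fails, the problem is more likely on the network path or the remote side than in your local Git configuration.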
I've been developing an application for some weeks, and it's been running in an OpenShift small gear with DIY 0.1 + PostgreSQL cartridges for several days, including ~5 new deployments. Everything was OK, and a new deploy stopped and started everything in seconds.
Nevertheless, today pushing master as usual stops the cartridge and it won't start. This is the trace:
Counting objects: 2688, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (1930/1930), done.
Writing objects: 100% (2080/2080), 10.76 MiB | 99 KiB/s, done.
Total 2080 (delta 1300), reused 13 (delta 0)
remote: Stopping DIY cartridge
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Logging in with ssh and running the start action hook manually fails because the database is stopped. Restarting the gear makes everything work again.
The failing deployment has nothing to do with it, since it only adds a few lines of code, nothing about configuration or anything that might break the boot.
Logs (at $OPENSHIFT_LOG_DIR) reveal nothing. Quota usage seems fine:
Cartridges Used Limit
---------------------- ------ -----
diy-0.1 postgresql-9.2 0.6 GB 1 GB
Any suggestions about what I could check?
Oh, dumb mistake. My last working deployment involved a change in the binary name, which now matches the gear name. The stop script, with its ps | grep and so on, was wrong: it was killing not only the application but also the connection. Changing it fixed the issue.
Solution inspired by this blogpost.
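For illustration only, a stop hook along these lines avoids the over-broad ps | grep match by killing just the PID recorded at start time (the app.pid file name is hypothetical, not an OpenShift convention):
# .openshift/action_hooks/stop (sketch)
# kill only the process recorded by the start hook, instead of grepping ps output
if [ -f "$OPENSHIFT_DATA_DIR/app.pid" ]; then
    kill "$(cat "$OPENSHIFT_DATA_DIR/app.pid")" || true
    rm -f "$OPENSHIFT_DATA_DIR/app.pid"
fi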
I found that there are still failed requests when traffic is high while using a command like this
haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
to hot reload the updated config file.
Below is the pressure testing result using webbench:
/usr/local/bin/webbench -c 10 -t 30 targetHProxyIP:1080
Webbench – Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET targetHProxyIP:1080
10 clients, running 30 sec.
Speed=70586 pages/min, 13372974 bytes/sec.
**Requests: 35289 susceed, 4 failed.**
I ran the command
haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
several times during the pressure testing.
The haproxy documentation mentions:
They will receive the SIGTTOU signal to ask them to temporarily stop listening to the ports so that the new process can grab them
so there is a time period when the old process is no longer listening on the port (say 80) and the new process hasn't started listening on it yet, and during this specific window new connections will fail. Does that make sense?
So is there any approach to reloading the haproxy configuration that will not impact either existing connections or new connections?
On recent kernels where SO_REUSEPORT is finally implemented (3.9+), this dead period does not exist anymore. While a patch has been available for older kernels for something like 10 years, it's obvious that many users cannot patch their kernels. If your system is more recent, then the new process will succeed in its attempt to bind() before asking the previous one to release the port, so there's a period where both processes are bound to the port instead of no process at all.
There is still a very tiny possibility that a connection arrives in the leaving process's queue at the moment it closes it. There is no reliable way to prevent this from happening, though.
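As a quick check (nothing haproxy-specific, just the kernel prerequisite mentioned above), you can verify this on your box before relying on the new behaviour:
# the seamless bind() handover needs a 3.9+ kernel for SO_REUSEPORT
uname -r
# during a reload you should briefly see two haproxy processes listening on the same port
ss -ltnp | grep 1080
If the kernel is older than 3.9, the short window with no listener is expected with a plain -sf reload.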