When currentBlock gets close to highestBlock, it stops growing and highestBlock starts growing instead. A while later, currentBlock starts growing again.
I run geth with the command geth --rinkeby --fast.
The highestBlock on my geth is very close to the actual number shown on https://www.rinkeby.io/#faucet.
> eth.syncing
{
currentBlock: 2401750,
highestBlock: 2401826,
knownStates: 14219701,
pulledStates: 14205841,
startingBlock: 2401554
}
> eth.blockNumber
0
The logs below seem normal:
INFO [06-04|15:34:52] Imported new state entries count=621 elapsed=4.093ms processed=14288823 pending=12362 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:34:56] Imported new block headers count=1 elapsed=713.868µs number=2401841 hash=db818c…70c969 ignored=0
INFO [06-04|15:34:57] Imported new state entries count=1388 elapsed=9.091ms processed=14290211 pending=12354 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:00] Imported new state entries count=768 elapsed=9.649ms processed=14290979 pending=11944 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:02] Imported new state entries count=607 elapsed=4.707ms processed=14291586 pending=11757 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:05] Imported new state entries count=768 elapsed=5.867ms processed=14292354 pending=11629 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:07] Imported new state entries count=601 elapsed=4.242ms processed=14292955 pending=11759 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:09] Imported new state entries count=601 elapsed=4.924ms processed=14293556 pending=11479 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:09] Imported new block headers count=1 elapsed=711.566µs number=2401842 hash=39a2d8…5318ec ignored=0
INFO [06-04|15:35:10] Imported new state entries count=384 elapsed=3.093ms processed=14293940 pending=11375 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:11] Imported new state entries count=384 elapsed=2.660ms processed=14294324 pending=11365 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:13] Imported new state entries count=601 elapsed=5.337ms processed=14294925 pending=11094 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:17] Imported new state entries count=985 elapsed=6.948ms processed=14295910 pending=11024 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:20] Imported new state entries count=602 elapsed=4.317ms processed=14296512 pending=10940 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:25] Imported new state entries count=602 elapsed=4.380ms processed=14297114 pending=10973 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:25] Imported new block headers count=1 elapsed=469.834µs number=2401843 hash=e8d3a7…152487 ignored=0
INFO [06-04|15:35:25] Imported new state entries count=384 elapsed=2.758ms processed=14297498 pending=11062 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:28] Imported new state entries count=592 elapsed=5.524ms processed=14298090 pending=11015 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:31] Imported new state entries count=1210 elapsed=203.329ms processed=14299300 pending=10477 retry=0 duplicate=6543 unexpected=9538
INFO [06-04|15:35:37] Imported new state entries count=1033 elapsed=1.656ms processed=14300333 pending=10590 retry=0 duplicate=6543 unexpected=9538
I think I've waited long enough, but currentBlock just can't reach highestBlock, even though they are very close.
Is this common?
Syncing hasn't finished. These numbers don't tell the whole story; just wait.
There could still be a lot of states to process, even after the block synchronization was completed (i.e. when currentBlock almost reached highestBlock).
There is no clear way to know the total number of states:
pulledStates is the number of state trie entries already downloaded, and
knownStates is the total number of state trie entries known about.
Synchronization is considered complete when the node has downloaded (pulled) all the states, known and not-yet-known; until then eth.blockNumber and eth.getBalance both return 0.
Geth needs to sync state and blocks.
There are around 81M state entries atm.
The folder size is 28GB after full sync.
You will have to be patient to sync a node.
It took me 60 hours to sync Rinkeby in fast mode. There were 125M state entries and the folder size was 38GB after synchronization. With time, both these numbers will grow.
You can type eth.syncing in the Geth console. If you get false as output, syncing is finished. Otherwise you'll get various details about the blocks and the states.
By typing eth.blockNumber you will get the current block number. If the output is 0, syncing is not yet complete.
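If you prefer to check this outside the Geth console, here is a minimal sketch using web3.py; the IPC path is an assumption and depends on your --datadir and network flags.

import os
from web3 import Web3

# Minimal sketch (web3.py). The IPC path is an assumption; for --rinkeby it is
# usually ~/.ethereum/rinkeby/geth.ipc, otherwise adjust to your --datadir.
w3 = Web3(Web3.IPCProvider(os.path.expanduser("~/.ethereum/rinkeby/geth.ipc")))

sync = w3.eth.syncing  # False once syncing has finished
if sync is False:
    # w3.eth.block_number is w3.eth.blockNumber on older web3.py releases
    print("synced, current block:", w3.eth.block_number)
else:
    print("blocks:", sync["currentBlock"], "/", sync["highestBlock"])
    print("states:", sync["pulledStates"], "/", sync["knownStates"])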
Here is the image from when syncing was complete (all the states were pulled) and I started downloading the chain segments.
Synchronized Rinkeby
I wrote a tiny Python script to monitor the progress. It's here: https://github.com/hayorov/ethereum-sync-mertics
My output:
2019-05-06 01:00:32 avg: 1827 max: 1938 min: 1378 states/s remain: 136604075 states 4 peers eta# 20:46:28.165828
2019-05-06 01:00:37 avg: 1864 max: 1938 min: 1378 states/s remain: 136595500 states 3 peers eta# 20:21:14.951050
2019-05-06 01:00:42 avg: 1791 max: 1938 min: 1378 states/s remain: 136583359 states 3 peers eta# 21:11:16.481006
2019-05-06 01:00:48 avg: 1742 max: 1938 min: 1378 states/s remain: 136580287 states 3 peers eta# 21:46:35.797305
2019-05-06 01:00:53 avg: 1721 max: 1938 min: 1378 states/s remain: 136575694 states 3 peers eta# 22:03:01.154434
2019-05-06 01:00:58 avg: 1682 max: 1938 min: 1378 states/s remain: 136569043 states 4 peers eta# 22:33:15.402442
2019-05-06 01:01:03 avg: 1698 max: 1938 min: 1378 states/s remain: 136564293 states 3 peers eta# 22:20:27.458747
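For reference, here is a rough sketch of the same idea (this is not the linked script): sample pulledStates and knownStates over RPC every few seconds and estimate a rate and ETA. The HTTP endpoint is an assumption; enable it with --http (or --rpc on older Geth).

import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

prev_pulled, prev_t = None, None
while True:
    sync = w3.eth.syncing
    if sync is False:
        print("sync finished")
        break
    pulled, known = sync["pulledStates"], sync["knownStates"]
    now = time.time()
    if prev_pulled is not None and now > prev_t:
        rate = (pulled - prev_pulled) / (now - prev_t)   # states per second
        remain = known - pulled                          # lower bound: knownStates keeps growing
        eta_h = remain / rate / 3600 if rate > 0 else float("inf")
        print(f"{rate:7.0f} states/s  remain={remain}  eta ~{eta_h:.1f} h")
    prev_pulled, prev_t = pulled, now
    time.sleep(5)

Note that the ETA is only a lower bound, because knownStates itself keeps growing during the sync.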
Related
I'm using an AMD Ryzen 7 2700X CPU, 64 GB of memory, a 1 TB SSD, and a 100 Mb/s download connection to create an Ethereum node. I'm running geth with --syncmode "fast" to build a node, but it seems to never catch up! What numbers should I be looking at to see if it ever will? I've read https://github.com/ethereum/go-ethereum/issues/20962, so I know block count isn't really what I should look for. Is there something else whose rate of change I should monitor to see if I'm catching up, e.g. pulledStates in the output of eth.syncing, or the pending number in the "Imported new state entries" log entries?
My geth command:
geth --syncmode "fast" --cache=4096 --allow-insecure-unlock --http --datadir /crypto/ethereum/mainnet --keystore /crypto/ethereum/keystore --signer=/crypto/ethereum/clef.ipc --maxpeers 25 2>/crypto/ethereum/mainnet_sync_r5.log
Output from eth.syncing:
eth.syncing
{ currentBlock: 12168062,
highestBlock: 12168171,
knownStates: 183392940,
pulledStates: 183343841,
startingBlock: 12160600 }
Last few lines from my command log:
INFO [04-03|11:16:04.511] Imported new state entries count=1350 elapsed=7.456ms processed=183847128 pending=53248 trieretry=93 coderetry=0 duplicate=2969 unexpected=4186274
INFO [04-03|11:16:04.674] Imported new state entries count=0 elapsed=4.487ms processed=183847128 pending=52401 trieretry=93 coderetry=0 duplicate=2969 unexpected=4186367
INFO [04-03|11:16:04.681] Imported new state entries count=1059 elapsed=6.784ms processed=183848187 pending=52401 trieretry=93 coderetry=0 duplicate=2969 unexpected=4186367
INFO [04-03|11:16:04.880] Imported new state entries count=1152 elapsed=5.161ms processed=183849339 pending=53150 trieretry=0 coderetry=0 duplicate=2969 unexpected=4186367
INFO [04-03|11:16:05.003] Imported new state entries count=1152 elapsed=5.906ms processed=183850491 pending=52394 trieretry=0 coderetry=0 duplicate=2969 unexpected=4186367
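One rough way to watch the rate of change asked about above is to follow the log and time how fast the processed= counter in the "Imported new state entries" lines grows. A minimal sketch (the log path is assumed to be the file from the 2> redirect in the command above):

import re
import time

LOG = "/crypto/ethereum/mainnet_sync_r5.log"   # assumption: same file as the 2> redirect
PATTERN = re.compile(r"Imported new state entries.*?processed=(\d+)")

prev_val, prev_t = None, None
with open(LOG) as f:
    f.seek(0, 2)   # jump to the end of the file, like tail -f
    while True:
        line = f.readline()
        if not line:
            time.sleep(1)
            continue
        m = PATTERN.search(line)
        if not m:
            continue
        val, now = int(m.group(1)), time.time()
        if prev_val is not None and now > prev_t:
            print(f"{(val - prev_val) / (now - prev_t):.0f} state entries/s (processed={val})")
        prev_val, prev_t = val, now

If that rate stays positive and pulledStates in eth.syncing keeps rising, the node is still making progress on the state sync.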
I have a Spark job that, once done, uploads its output to S3 in JSON format.
dataframe.write.mode(SaveMode.Overwrite).json(file_path)
Yesterday, though, one JSON file it uploaded was incomplete; the rest of the JSON files looked fine as usual. Below is a snippet of the logs for that one file. I have extracted these from 2 of the many log files that were generated for that job run.
log file 1
20/04/07 13:12:41 INFO MultipartUploadOutputStream: uploadPart: partNum 1 of 's3://bucket/part-00072-50d3246e-e18c-4058-9e1c-ad714305c92f-c000.json' from local file '/mnt/s3/emrfs-5360609960688228490/0000000000', 134217728 bytes in 5577 ms, md5: qg7f22UwVchHRejYe+41GQ== md5hex: aa0edfdb653055c84745e8d87bee3519
20/04/07 13:12:43 INFO MultipartUploadOutputStream: uploadPart: partNum 2 of 's3://bucket/part-00072-50d3246e-e18c-4058-9e1c-ad714305c92f-c000.json' from local file '/mnt/s3/emrfs-5360609960688228490/0000000001', 69266128 bytes in 1322 ms, md5: hOJmAWIoAMs2EtyBCuUw2g== md5hex: 84e26601622800cb3612dc810ae530da
20/04/07 13:12:44 INFO Executor: Finished task 72.0 in stage 549.0 (TID 413542). 212642 bytes result sent to driver
20/04/07 13:12:44 INFO DefaultMultipartUploadDispatcher: Completed multipart upload of 2 parts 203483856 bytes
20/04/07 13:12:44 INFO SparkHadoopMapRedUtil: No need to commit output of task because needsTaskCommit=false: attempt_20200407131212_0549_m_000072_0
log file 2
20/04/07 13:12:37 INFO Executor: Running task 72.1 in stage 549.0 (TID 413637)
20/04/07 13:12:44 INFO Executor: Executor is trying to kill task 72.1 in stage 549.0 (TID 413637), reason: another attempt succeeded
20/04/07 13:12:44 INFO MultipartUploadOutputStream: uploadPart: partNum 2 of 's3://bucket/part-00072-50d3246e-e18c-4058-9e1c-ad714305c92f-c000.json' from local file '/mnt/s3/emrfs-3565489585808272492/0000000001', 1 bytes in 43 ms, md5: y7GE3Y4FyXCeXcrtqgSVzw== md5hex: cbb184dd8e05c9709e5dcaedaa0495cf
20/04/07 13:12:45 INFO MultipartUploadOutputStream: uploadPart: partNum 1 of 's3://bucket/part-00072-50d3246e-e18c-4058-9e1c-ad714305c92f-c000.json' from local file '/mnt/s3/emrfs-3565489585808272492/0000000000', 134217728 bytes in 1395 ms, md5: qg7f22UwVchHRejYe+41GQ== md5hex: aa0edfdb653055c84745e8d87bee3519
20/04/07 13:12:46 INFO DefaultMultipartUploadDispatcher: Completed multipart upload of 2 parts 134217729 bytes
20/04/07 13:12:46 INFO Executor: Executor interrupted and killed task 72.1 in stage 549.0 (TID 413637), reason: another attempt succeeded
As can be seen from the logs, the second task attempt tried to stop itself once it learned that the first attempt had uploaded the file. But it wasn't able to roll back entirely and ended up overwriting the file with partial data (1 byte). That JSON now looks like this
{"key": "v}
instead of
{"key": "value"}
and this causes the JSON reader to throw an exception. I have tried searching for this issue but can't find anyone ever posting about it. Has anyone faced this issue here? Is this a bug in Spark? How can I overcome this?
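Not a fix for the write side, but on the read side a defensive read can at least surface malformed lines instead of failing. A minimal PySpark sketch under assumptions: an explicit schema with a corrupt-record column, and placeholder path/column names.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("corrupt-json-check").getOrCreate()

# Placeholder schema: include the corrupt-record column explicitly so
# PERMISSIVE mode can populate it for malformed lines.
schema = StructType([
    StructField("key", StringType(), True),
    StructField("_corrupt_record", StringType(), True),
])

df = (spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("s3://bucket/output-prefix/"))          # placeholder path

df = df.cache()   # Spark requires caching before querying only the corrupt-record column
bad = df.filter(F.col("_corrupt_record").isNotNull())
print("malformed rows:", bad.count())

This only detects the broken file; it does not prevent a retried attempt from overwriting the good output.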
Below is my test plan to read data from multiple CSV files. I want to test a scenario like:
1. 10 users perform an operation on 100 documents. Ideally each user should get 10 documents and perform the operation on them.
TestPlan
Thread Group
While controller
LoginUserDataConfig
LoginRequestRecordingController
HTTPLoginRequest
DocumentOperationRecordingController
DocIDList
HttpSaveRequest
But with the above plan it takes only 10 documents and then stops. I ran the script with different CSV Data Set Config settings, switching Sharing mode between All Threads and Current Thread, but I'm not getting the desired output.
Can anyone correct my test plan?
Thread Settings:
Number of Thread: 10
Ramp-Up Period: 2
loop count: 1
LoginUserDataConfig Settings:
Allowed Quoted Data: False
Recycle on EOF? False
Stop Thread on EOF: True
Sharing mode: Current Thread Group
DocIDList Settings:
Allowed Quoted Data: False
Recycle on EOF? False
Stop Thread on EOF: True
Sharing mode: Current Thread Group
You should set the Loop Count to Forever; the test will then keep iterating until the CSV reaches end of file (100 IDs).
I'm trying to understand the following situation with MySQL Cluster.
I have one table:
38 fields total
22 of the fields are indexed (field type: int)
The other fields hold double and bigint values
The table has no primary key defined
My Environment (10 nodes):
data nodes: 8 (AWS EC2 instances, m4.xlarge 16GB RAM, 750GB HDD)
management nodes: 2 (AWS EC2 instances, m4.2xlarge 32GB RAM)
sql nodes: 2 (the same VM as in management nodes)
MySQL Cluster settings (config.ini) are set to:
[NDBD DEFAULT]
NoOfReplicas=2
ServerPort=2200
Datadir=/storage/data/mysqlcluster/
FileSystemPathDD=/storage/data/mysqlcluster/
BackupDataDir=/storage/data/mysqlcluster//backup/
#FileSystemPathUndoFiles=/storage/data/mysqlcluster/
#FileSystemPathDataFiles=/storage/data/mysqlcluster/
DataMemory=9970M
IndexMemory=1247M
LockPagesInMainMemory=1
MaxNoOfConcurrentOperations=100000
MaxNoOfConcurrentTransactions=16384
StringMemory=25
MaxNoOfTables=4096
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512
MaxNoOfAttributes=24576
MaxNoOfTriggers=14336
### Params for REDO LOG
FragmentLogFileSize=256M
InitFragmentLogFiles=SPARSE
NoOfFragmentLogFiles=39
RedoBuffer=64M
TransactionBufferMemory=8M
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=100
TimeBetweenEpochsTimeout=0
### Params for LCP
MinDiskWriteSpeed=10M
MaxDiskWriteSpeed=20M
MaxDiskWriteSpeedOtherNodeRestart=50M
MaxDiskWriteSpeedOwnRestart=200M
TimeBetweenLocalCheckpoints=20
### Heartbeating
HeartbeatIntervalDbDb=15000
HeartbeatIntervalDbApi=15000
### Params for setting logging
MemReportFrequency=30
BackupReportFrequency=10
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15
### Params for BACKUP
BackupMaxWriteSize=1M
BackupDataBufferSize=24M
BackupLogBufferSize=16M
BackupMemory=40M
### Params for ODIRECT
#Reports indicates that odirect=1 can cause io errors (os err code 5) on some systems. You must test.
#ODirect=1
### Watchdog
TimeBetweenWatchdogCheckInitial=60000
### TransactionInactiveTimeout - should be enabled in Production
TransactionInactiveTimeout=60000
### New 7.1.10 redo logging parameters
RedoOverCommitCounter=3
RedoOverCommitLimit=20
### REALTIME EXTENSIONS
#RealTimeScheduler=1
### REALTIME EXTENSIONS FOR 6.3 ONLY
#SchedulerExecutionTimer=80
#SchedulerSpinTimer=40
### DISK DATA
SharedGlobalMemory=20M
DiskPageBufferMemory=64M
BatchSizePerLocalScan=512
After importing 75M records into the table I get the error "The table 'test_table' is full" and cannot import any more data into it.
I don't understand why that is.
Looking at information_schema I can see that avg_record_size is 244. The full table size is ~19 GB.
Also, if I look at DataMemory used on each data node, I see ~94%.
IndexMemory used is ~22%.
But I have 8 data nodes with a total DataMemory of 8 × 9970M ≈ 80 GB.
My table is only 19 GB, so even with replicas the memory used should be 19 × 2 = 38 GB.
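For reference, the arithmetic above written out (values taken from config.ini and information_schema; nothing here is measured on the cluster):

data_memory_per_node_mb = 9970   # DataMemory=9970M
data_nodes = 8
replicas = 2                     # NoOfReplicas=2
table_gb = 19                    # ~19 GB table, avg_record_size ~244

total_dm_gb = data_memory_per_node_mb * data_nodes / 1024
print(f"DataMemory across all nodes: ~{total_dm_gb:.0f} GB")     # ~78 GB (~80 GB as rounded above)
print(f"table stored with replicas: ~{table_gb * replicas} GB")  # ~38 GB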
Could somebody explain the situation to me? And how can I configure the cluster so I can import the maximum possible number of records?
The full table in production will have 33 billion records.
For tests on the given cluster I need to test 100M and 1B data sets.
Thanks.
I would like to connect my Linux board to an access point (e.g. a mobile phone) using wpa_supplicant.
My mobile phone AP is configured with WPA (AES) security.
I modified wpa_supplicant.conf as follows:
ctrl_interface=DIR=/var/run/wpa_supplicant
network={
ssid="HTC"
psk="mypasswd"
scan_ssid=1
proto=WPA2
key_mgmt=WPA-PSK
pairwise=CCMP TKIP
group=CCMP TKIP
priority=5
}
I bring up mlan0 and launch wpa_supplicant as follows:
root@root:~# wpa_supplicant -i mlan0 -c /etc/wpa_supplicant.conf
Successfully initialized wpa_supplicant
root@root:~# rfkill: Cannot open RFKILL control device
[ 2113.867283] IPv6: ADDRCONF(NETDEV_UP): mlan0: link is not ready
[ 2113.999385] wlan: mlan0 START SCAN
mlan0: CTRL-EVENT-SCAN-STARTED
[ 2116.924881] wlan: SCAN COMPLETED: scanned AP count=9
mlan0: Trying to associate with 84:7a:88:50:b0:a7 (SSID='HTC' freq=2437 MHz)
[ 2116.954134] ASSOC_RESP: Association Failed, status code = 17, error = 0x411, a_id = 0x0
[ 2116.962280] IOCTL failed: 9a8db800 id=0x20000, sub_id=0x20001 action=1, status_code=0x4110011
mlan0: CTRL-EVENT-ASSOC-REJECT status_code=1
[ 2117.073403] wlan: mlan0 START SCAN
mlan0: CTRL-EVENT-SCAN-STARTED
...
But the connection is never established.
Just from looking at the wpa_supplicant output, it is clear that there are no problems with your interfaces mlan/wlan or your IP, as suggested by other responders.
Just to explain the output:
[ 2113.999385] wlan: mlan0 START SCAN
mlan0: CTRL-EVENT-SCAN-STARTED
[ 2116.924881] wlan: SCAN COMPLETED: scanned AP count=9
mlan0: Trying to associate with 84:7a:88:50:b0:a7 (SSID='HTC' freq=2437 MHz)
The above means that:
wpa_supplicant launched successfully.
wpa_supplicant started a wireless scan of nearby BSS's (Basic Service Set).
wpa_supplicant found 9 nearby BSS's, one of which is 'HTC'.
wpa_supplicant started an association sequence with 'HTC' on the 2437 MHz frequency, i.e. channel 6.
So, what went wrong???
[ 2116.954134] ASSOC_RESP: Association Failed, status code = 17, error = 0x411, a_id = 0x0
You got error code 17 - Association denied because the AP is unable to handle additional associated stations. This happens if the AP (Access Point) runs out of AIDs.
One of the below is probably true:
Your AP is a hotspot with a limited number of stations, or you are using an inferior AP that doesn't support enough stations.
You tried to connect to a very busy access point.
So, my suggestions are:
Try to configure your AP to allow a larger number of stations.
Try to connect to a different network to see if the problem reproduces.
If your AP configuration is OK and it is not very busy (a low number of associated stations), this might indicate a problem with the AP itself: you won't be able to connect any new station. I'd suggest rebooting the AP.
Try giving your board a static IP on the same subnet as your phone.
rfkill: Cannot open RFKILL control device
I got the same error message when:
I forgot to plug in the WiFi dongle
the interface specified with the -i flag did not exist
(And maybe it's not "mlan0" but "wlan0"?)
In the second case, try to modify your command from
wpa_supplicant -i mlan0 -c /etc/wpa_supplicant.conf
to
wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf
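If you're not sure which wireless interface actually exists on the board, a quick sysfs check can list them (a minimal sketch; it assumes a Linux system with sysfs mounted at /sys):

import os

# Wireless interfaces expose a "wireless" subdirectory under /sys/class/net/<iface>/
for iface in sorted(os.listdir("/sys/class/net")):
    if os.path.isdir(f"/sys/class/net/{iface}/wireless"):
        print("wireless interface:", iface)

Whatever it prints (mlan0, wlan0, ...) is the name to pass to -i.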