According to Etherscan I have 7.5 Ether, but when I execute eth.getBalance(eth.accounts[0]) inside the JavaScript console it always returns "0".
This is how I am connecting geth to Rinkeby (it has been running for more than 24 hours):
geth --rinkeby
This is the status of the sync:
λ geth --rinkeby attach ipc:\\.\pipe\geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.9.9-stable-01744997/windows-amd64/go1.13.4
coinbase: 0x7f9153e8fe06c4b3eb10e8457c20d0559921de5c
at block: 0 (Wed, 12 Apr 2017 16:59:06 CEST)
datadir: C:\Users\andre_000\AppData\Local\Ethereum\rinkeby
modules: admin:1.0 clique:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
> eth.syncing
{
currentBlock: 5746334,
highestBlock: 5746402,
knownStates: 32641057,
pulledStates: 32636964,
startingBlock: 5746304
}
> eth.getBalance(eth.accounts[0])
0
> eth.getBalance(eth.coinbase)
0
> web3.fromWei(eth.getBalance(eth.coinbase));
0
> eth.getBalance("0x7f9153e8fe06c4b3eb10e8457c20d0559921de5c")
0
> eth.blockNumber
0
du -h
30G
It sounds like geth is not yet synced up.
Please type this into your geth console:
eth.getBlock("latest").number
As of this post, you should get 5757199 or higher.
"Syncing Ethereum is a pain point", as says Péter Szilágyi (team lead of ethereum).
From https://github.com/ethereum/go-ethereum/issues/16875:
The current default mode of sync for Geth is called fast sync. Instead
of starting from the genesis block and reprocessing all the
transactions that ever occurred (which could take weeks), fast sync
downloads the blocks, and only verifies the associated proof-of-works.
Downloading all the blocks is a straightforward and fast procedure and
will relatively quickly reassemble the entire chain.
Many people falsely assume that because they have the blocks, they are
in sync. Unfortunately this is not the case, since no transaction was
executed, so we do not have any account state available (ie. balances,
nonces, smart contract code and data). These need to be downloaded
separately and cross checked with the latest blocks. This phase is
called the state trie download and it actually runs concurrently with
the block downloads; alas it takes a lot longer nowadays than
downloading the blocks.
So, what's the state trie? In the Ethereum mainnet, there are a ton of
accounts already, which track the balance, nonce, etc of each
user/contract. The accounts themselves are however insufficient to run
a node, they need to be cryptographically linked to each block so that
nodes can actually verify that the accounts are not tampered with.
This cryptographic linking is done by creating a tree data structure
above the accounts, each level aggregating the layer below it into an
ever smaller layer, until you reach the single root. This gigantic
data structure containing all the accounts and the intermediate
cryptographic proofs is called the state trie.
Ok, so why does this pose a problem? This trie data structure is an
intricate interlink of hundreds of millions of tiny cryptographic
proofs (trie nodes). To truly have a synchronized node, you need to
download all the account data, as well as all the tiny cryptographic
proofs to verify that no one in the network is trying to cheat you.
This itself is already a crazy number of data items. The part where it
gets even messier is that this data is constantly morphing: at every
block (15s), about 1000 nodes are deleted from this trie and about
2000 new ones are added. This means your node needs to synchronize a
dataset that is changing 200 times per second. The worst part is that
while you are synchronizing, the network is moving forward, and state
that you began to download might disappear while you're downloading,
so your node needs to constantly follow the network while trying to
gather all the recent data. But until you actually do gather all the
data, your local node is not usable since it cannot cryptographically
prove anything about any accounts.
If you see that you are 64 blocks behind mainnet, you aren't yet
synchronized, not even close. You are just done with the block
download phase and still running the state downloads. You can see this
yourself via the seemingly endless Imported state entries [...] stream
of logs. You'll need to wait that out too before your node comes truly
online.
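If you want to watch that state download phase programmatically rather than scrolling the "Imported state entries" logs, a minimal sketch along these lines should work (web3.py attached to the same IPC pipe shown above; the polling interval and the printed fields are only illustrative):
import time
from web3 import Web3

# Attach to the same Windows named pipe the geth console uses above.
w3 = Web3(Web3.IPCProvider(r"\\.\pipe\geth.ipc"))

while True:
    status = w3.eth.syncing            # False once blocks AND state are fully in
    if status is False:
        print("sync complete, node is usable")
        break
    # pulledStates creeping up towards knownStates is the state trie download phase
    print("state entries: %d / %d" % (status["pulledStates"], status["knownStates"]))
    time.sleep(30)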
Let's suppose we have 100 sensors, each sending an attribute every second to Orion. How could I manage this massive amount of data?
via batch operations (but I don't know if Orion supports them)
using an edge node (to aggregate the data) and sending it to Orion (after 1 minute)
Thank you
Let’s consider 100 tps a high load for a given infrastructure (load and throughput must always be related to the infrastructure and the e2e scenarios).
The main problem you may encounter is not the updates themselves; Orion Context Broker and its fork Orion-LD can handle a lot of updates. The main problem in real/production scenarios, like the ones handled by Orion Context Broker and NGSI v2, is the NOTIFICATIONS related to those UPDATES.
If you need a 1:1 (or even a 1:2 or 1:4) ratio between UPDATES and NOTIFICATIONS (for example, you want to keep track of the history of every measure and also send the measures to a CEP for post-processing), then it’s not only a matter of how many updates Orion can handle, but of how many update-notifications the whole E2E chain can handle. If you have a slow notification endpoint, Orion will saturate its notification queues and you will lose notifications (so those updates never reach the historical store, the CEP, …).
Batch updates do not help here, since the update request server is not the bottleneck and they are internally handled as single updates.
To alleviate this problem I would recommend enabling the NGSI v2 flow control mechanism (only available in v2), so that the update process can be automatically slowed down when the notification throughput requires it.
And of course, in any IoT scenario, if you don’t need all the data, the earlier you aggregate the better. So if your E2E doesn’t need to keep track of every single measure, data loggers are more than welcome.
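If you go the aggregation route, a rough sketch of such an edge data logger could look like this (Python; the broker URL, entity ids and the temperature attribute are placeholders, and the NGSI v2 PATCH call assumes the entities already exist in Orion):
import time
from collections import defaultdict
import requests

BROKER = "http://orion.example.com:1026"   # placeholder Orion endpoint
readings = defaultdict(list)

def on_reading(sensor_id, value):
    # Called once per second per sensor by whatever collects the raw measures.
    readings[sensor_id].append(value)

def flush():
    # Invoke this from a 60-second timer: one averaged update per sensor per minute
    # instead of sixty raw updates, so Orion sees 1/60th of the notification load.
    for sensor_id, values in readings.items():
        avg = sum(values) / len(values)
        requests.patch(
            BROKER + "/v2/entities/" + sensor_id + "/attrs",
            json={"temperature": {"value": avg, "type": "Number"}},
        )
    readings.clear()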
For 100 sensors sending one update per second (did I understand that correctly?) ... that's nothing. The broker can handle 2-3 thousand updates per second running on a single core and with ~4 GB of RAM (MongoDB needs about 3 times that).
And, if it's more (a lot more), then yes, the NGSI-LD API defines batch operations (for Create, Update, Upsert, and Delete of entities), and Orion-LD implements them all.
However, there's no batch operation for updating a single attribute. You'd need to use "batch update entity" in update mode (not replace). Check the NGSI-LD API spec for details.
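For completeness, a hedged sketch of what a batch upsert could look like against a local Orion-LD (the entity ids, the attribute and the @context are placeholder choices; check the NGSI-LD spec and the Orion-LD docs for the exact options your version supports):
import requests

BROKER = "http://localhost:1026"   # assumed local Orion-LD
CONTEXT = "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"

entities = []
for i in range(100):
    entities.append({
        "id": "urn:ngsi-ld:Sensor:%03d" % i,
        "type": "Sensor",
        "temperature": {"type": "Property", "value": 20.0 + i},
        "@context": CONTEXT,
    })

# entityOperations/upsert creates missing entities and updates existing ones;
# options=update asks for attribute-level updates rather than full replacement.
r = requests.post(
    BROKER + "/ngsi-ld/v1/entityOperations/upsert?options=update",
    json=entities,
    headers={"Content-Type": "application/ld+json"},
)
r.raise_for_status()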
On Windows 10, in my command prompt, I run
> geth --rinkeby
which starts to sync my node with the network.
In another command prompt, I run
> geth --rinkeby attach ipc:\\.\pipe\geth.ipc
And then
> eth.syncing
which gives
{
currentBlock: 3500871,
highestBlock: 3500955,
knownStates: 25708160,
pulledStates: 25680474,
startingBlock: 3500738
}
As you can see, I am always behind the highest block by about 80. I've heard this is normal for the testnet. I created an account on Rinkeby and requested ether via the faucet: https://faucet.rinkeby.io/. I also tried https://faucet.ropsten.be/ but couldn't get ether.
On the geth console, I can show my account, which gives
> eth.accounts
["0x7bf0a466e7087c4d40211c0fa8aaf3011176b6c6"]
and viewing the balance I get:
eth.getBalance(eth.accounts[0])
I don't know if this is due to my node being 80 blocks behind the highest block...?
Edit: It may be worth adding that I created a symbolic link from AppData/Roaming/Ethereum on my C drive to another folder on my D drive, as I was running out of space. (I don't know if that affects my sync.)
I guess you are facing the problem known as "not syncing the last 65 blocks".
Q: I'm stuck at 64 blocks behind mainnet?!
A: As explained above, you are not stuck, just finished with the block
download phase, waiting for the state download phase to complete too.
This latter phase nowadays takes a lot longer than just getting the
blocks.
For more information, see https://github.com/ethereum/mist/issues/3760#issuecomment-390892894
Stop geth and start it again. It’s pretty normal to be behind the highest block. For the ether, check on Etherscan whether you actually received the ether from the faucet or not. That way you will know at what block height you received your ether from the faucet. Then wait until your geth has synced to that block. Also, the best option would be to use something like QuickNode, where you don’t need to be concerned about always keeping your machine running or waiting for hours before continuing with development work. Yes, they have a small nominal fee, but for the service they provide it’s pretty worth it.
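A rough sketch of that "wait until geth has passed the faucet block" approach, using web3.py (FAUCET_BLOCK and the address are placeholders you would take from Etherscan and from your own account):
import time
from web3 import Web3

w3 = Web3(Web3.IPCProvider(r"\\.\pipe\geth.ipc"))
FAUCET_BLOCK = 3500000   # placeholder: block in which the faucet tx was mined
ADDR = Web3.toChecksumAddress("0x7bf0a466e7087c4d40211c0fa8aaf3011176b6c6")

# blockNumber stays at 0 during a fast sync, so this only passes once the node is
# actually usable and has reached the faucet block.
while w3.eth.blockNumber < FAUCET_BLOCK or w3.eth.syncing:
    time.sleep(30)

print(Web3.fromWei(w3.eth.getBalance(ADDR), "ether"))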
> w3.eth.syncing
AttributeDict({
'currentBlock': 5787386,
'highestBlock': 5787491,
'knownStates': 138355583,
'pulledStates': 138341120,
'startingBlock': 5787335,
})
> w3.eth.blockNumber
0
I had done a full sync but blockNumber is always 0.
Péter, the lead geth developer, posted a thorough response to this question here: "Block number is always zero with fast syncmode". To add slightly more color to carver's answer, while you may have received the most recent block headers (eth.currentBlock), your node probably still has a lot of work remaining to download the entire state trie. To quote Péter:
Many people falsely assume that because they have the blocks, they are
in sync. Unfortunately this is not the case, since no transaction was
executed, so we do not have any account state available (ie. balances,
nonces, smart contract code and data). These need to be downloaded
separately and cross checked with the latest blocks. This phase is
called the state trie download and it actually runs concurrently with
the block downloads; alas it takes a lot longer nowadays than
downloading the blocks.
This is the same situation as "Why is my ether balance 0 in geth, even though the sync is nearly complete?", but with a slightly different "symptom".
To quote the important bit:
geth --fast has an interesting effect: geth cannot provide any information about accounts or contracts until the sync is fully complete.
Try querying the balance again after eth.syncing returns false.
Note that in addition to accounts and contracts, you also cannot retrieve any information about blocks until the sync is complete.
When your sync is fully complete, syncing will return false, like:
> w3.eth.syncing
False
I need to compile and run user-submitted scripts on my site, similar to what codepad and ideone do. How can I sandbox these programs so that malicious users don't take down my server?
Specifically, I want to lock them inside an empty directory and prevent them from reading or writing anywhere outside of that, from consuming too much memory or CPU, or from doing anything else malicious.
I will need to communicate with these programs via pipes (over stdin/stdout) from outside the sandbox.
codepad.org has something based on geordi, which runs everything in a chroot (i.e. restricted to a subtree of the filesystem) with resource restrictions, and uses the ptrace API to restrict the untrusted program's use of system calls. See http://codepad.org/about.
I've previously used Systrace, another utility for restricting system calls.
If the policy is set up properly, the untrusted program would be prevented from breaking anything in the sandbox or accessing anything it shouldn't, so there might be no need to put programs in separate chroots and create and delete them for each run. Although that would provide another layer of protection, which probably wouldn't hurt.
Some time ago I was searching for a sandbox solution to use in an automated assignment evaluation system for CS students. Much like everything else, there is a trade-off between the various properties:
Isolation and access control granularity
Performance and ease of installation/configuration
I eventually decided on a multi-tiered architecture, based on Linux:
Level 0 - Virtualization:
By using one or more virtual machine snapshots for all assignments within a specific time range, it was possible to gain several advantages:
Clear separation of sensitive from non-sensitive data.
At the end of the period (e.g. once per day or after each session) the VM is shutdown and restarted from the snapshot, thus removing any remnants of malicious or rogue code.
A first level of computer resource isolation: each VM has limited disk, CPU and memory resources and the host machine is not directly accessible.
Straight-forward network filtering: By having the VM on an internal interface, the firewall on the host can selectively filter the network connections.
For example, a VM intended for testing students of an introductory programming course could have all incoming and outgoing connections blocked, since students at that level would not have network programming assignments. At higher levels the corresponding VMs could e.g. have all outgoing connections blocked and allow incoming connection only from within the faculty.
It would also make sense to have a separate VM for the Web-based submission system - one that could upload files to the evaluation VMs, but do little else.
Level 1 - Basic operating-system constraints:
On a Unix OS that would contain the traditional access and resource control mechanisms:
Each sandboxed program could be executed as a separate user, perhaps in a separate chroot jail.
Strict user permissions, possibly with ACLs.
ulimit resource limits on processor time and memory usage (see the sketch after this section).
Execution under nice to reduce priority over more critical processes. On Linux you could also use ionice and cpulimit - I am not sure what equivalents exist on other systems.
Disk quotas.
Per-user connection filtering.
You would probably want to run the compiler as a slightly more privileged user: more memory and CPU time, access to compiler tools and header files, etc.
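To make the Level 1 points a bit more concrete, here is a minimal sketch (Python on Linux) of launching an untrusted, already-compiled program with ulimit-style caps, an empty working directory, and stdin/stdout pipes; the specific limits, paths and binary name are only examples:
import resource
import subprocess

def set_limits():
    # Applied in the child process just before exec, like ulimit would be.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                  # 2 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (256 << 20, 256 << 20))   # 256 MB address space
    resource.setrlimit(resource.RLIMIT_FSIZE, (1 << 20, 1 << 20))    # 1 MB max file size
    resource.setrlimit(resource.RLIMIT_NPROC, (16, 16))              # limit fork bombs

proc = subprocess.Popen(
    ["./untrusted_program"],            # placeholder: binary compiled into the sandbox dir
    cwd="/srv/sandbox/job42",           # placeholder empty directory (or chroot jail)
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    preexec_fn=set_limits,              # applied in the child before exec
)
out, err = proc.communicate(input=b"test input\n", timeout=10)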
Level 2 - Advanced operating-system constraints:
On Linux I consider that to be the use of a Linux Security Module, such as AppArmor or SELinux to limit access to specific files and/or system calls. Some Linux distributions offer some sandboxing security profiles, but it can still be a long and painful process to get something like this working correctly.
Level 3 - User-space sandboxing solutions:
I have successfully used Systrace on a small scale, as mentioned in this older answer of mine. There are several other sandboxing solutions for Linux, such as libsandbox. Such solutions may provide more fine-grained control over the system calls that may be used than LSM-based alternatives, but can have a measurable impact on performance.
Level 4 - Preemptive strikes:
Since you will be compiling the code yourself, rather than executing existing binaries, you have a few additional tools in your hands:
Restrictions based on code metrics; e.g. a simple "Hello World" program should never be larger than 20-30 lines of code.
Selective access to system libraries and header files; if you don't want your users to call connect() you might just restrict access to socket.h.
Static code analysis; disallow assembly code, "weird" string literals (e.g. shell-code) and the use of restricted system functions (a toy filter along these lines is sketched below).
A competent programmer might be able to get around such measures, but as the cost-to-benefit ratio increases they would be far less likely to persist.
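As a toy example of such preemptive checks, a crude pre-compilation filter might look like this (the size limit and the blacklist are made-up illustrations, nowhere near a real static analysis):
import re

MAX_LINES = 200                       # code-metric restriction
BANNED = [
    r"\basm\b", r"__asm__",           # inline assembly
    r"\\x[0-9a-fA-F]{2}",             # "weird" escaped string literals / shell-code
    r"\bconnect\s*\(",                # restricted system functions
]

def looks_suspicious(source):
    if source.count("\n") + 1 > MAX_LINES:
        return True
    return any(re.search(pattern, source) for pattern in BANNED)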
Level 0-5 - Monitoring and logging:
You should be monitoring the performance of your system and logging all failed attempts. Not only would you be more likely to interrupt an in-progress attack at a system level, but you might be able to make use of administrative means to protect your system, such as:
calling whatever security officials are in charge of such issues.
finding that persistent little hacker of yours and offering them a job.
The degree of protection that you need and the resources that you are willing to expend to set it up are up to you.
I am the developer of libsandbox mentioned by @thkala, and I do recommend it for use in your project.
Some additional comments on @thkala's answer:
it is fair to classify libsandbox as a user-land tool, but libsandbox does integrate standard OS-level security mechanisms (i.e. chroot, setuid, and resource quota);
restricting access to C/C++ headers, or static analysis of users' code, does NOT prevent system functions like connect() from being called. This is because user code can (1) declare function prototypes by itself without including system headers, or (2) invoke the underlying kernel-land system calls directly without touching the wrapper functions in libc;
compile-time protection also deserves attention, because malicious C/C++ code can exhaust your CPU with infinite template recursion or pre-processing macro expansion.
I need to set up a job/message queue with the option to set a delay for the task so that it's not picked up immediately by a free worker, but after a certain time (which can vary from task to task). I looked into a couple of Linux queue solutions (RabbitMQ, Gearman, MemcacheQ), but none of them seem to offer this feature out of the box.
Any ideas on how I could achieve this?
Thanks!
I've used BeanstalkD to great effect, using the delay option when inserting a new job to wait several seconds until the item becomes available to be reserved.
If you are doing longer-term delays (more than, say, 30 seconds), or the jobs are somewhat important to perform (albeit later), then it also has a binary logging system, so any daemon crash would still leave a record of the job. That said, I've put hundreds of thousands of live jobs through Beanstalkd instances, and the workers that I wrote were always more problematic than the server.
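For reference, here is roughly what that looks like with a Python beanstalkd client (greenstalk here, but any client exposing the delay argument works the same way; host, tube and payload are placeholders):
import greenstalk

# Producer: the job stays invisible for 120 seconds, then becomes reservable.
producer = greenstalk.Client(("127.0.0.1", 11300), use="delayed-jobs")
producer.put(b'{"task": "send-reminder", "user": 42}', delay=120)

# Worker: reserve() blocks until a job's delay has expired and it is handed out.
worker = greenstalk.Client(("127.0.0.1", 11300), watch="delayed-jobs")
job = worker.reserve()
print(job.body)
worker.delete(job)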
You could use an AMQP broker (such as RabbitMQ) and have an "agent" (e.g. a Python process built using python-amqplib) that sits on an exchange and intercepts specific messages (with a specific routing_key); once a timer has elapsed, it sends the message back on the exchange with a different routing_key.
I realize this means "translating/mapping" routing keys but it works. Working with RabbitMQ and python-amqplib is very straightforward.
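A minimal sketch of that agent idea with a present-day client (pika here instead of the older python-amqplib, same pattern; the exchange, queue and routing-key names are placeholders, and the fixed sleep stands in for a real scheduler):
import time
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="jobs", exchange_type="topic")
ch.queue_declare(queue="delay-agent")
ch.queue_bind(queue="delay-agent", exchange="jobs", routing_key="delayed.#")

def on_message(channel, method, properties, body):
    time.sleep(60)   # crude fixed delay; a real agent would schedule instead of blocking
    ready_key = method.routing_key.replace("delayed.", "ready.", 1)
    channel.basic_publish(exchange="jobs", routing_key=ready_key, body=body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="delay-agent", on_message_callback=on_message)
ch.start_consuming()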