What kind of latency is expected of contract calls?

I'm developing a dapp and got it working well using web3 and testrpc.
My frontend is currently pretty "chatty" with contract calls (constant methods) and everything works super fast.
I was wondering what kind of latency I should expect on the real network for simple calls. Do I need to aggressively optimize my contract reads?

It depends. If your dApp is running against a local node (and it's fully synced), then constant functions will execute similarly to what you're seeing in your testing. If not, then all bets are off: your latency will depend on the provider you're connecting to.
My best advice is, once you finish development, to deploy to a testnet and run performance tests. Chances are that if you're not running a fully synced local node, and your app is as chatty as you say, you may be disappointed with the results. You would want to look into optimizing your reads, moving some state data out of the contract (if possible), or turning your client into a light node.
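To make the "optimize your reads" point concrete, one cheap win is to issue independent constant-method calls in parallel, so the total cost is roughly one provider round trip instead of N. A minimal sketch, assuming web3.js 1.x; the provider URL, ABI, address, and method names are placeholders rather than anything from the question:

```typescript
// Sketch: run independent constant-method reads in parallel instead of
// serially as the UI renders. All identifiers below are hypothetical.
import Web3 from "web3";

declare const abi: any;        // your contract's ABI (placeholder)
declare const address: string; // your contract's address (placeholder)

const web3 = new Web3("https://mainnet.infura.io/v3/<project-id>"); // any provider
const contract = new web3.eth.Contract(abi, address);

async function loadDashboard() {
  // One Promise.all instead of N sequential awaits: total latency is
  // roughly one provider round trip rather than N of them.
  const [name, supply, paused] = await Promise.all([
    contract.methods.name().call(),
    contract.methods.totalSupply().call(),
    contract.methods.paused().call(),
  ]);
  return { name, supply, paused };
}
```

web3 also ships a BatchRequest API that can pack such reads into a single HTTP payload, which helps further when the provider is remote.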

Related

How many EVMs are there in the Ethereum chain?

I am really confused right now. I work on Solidity development, but today I started thinking about one question: how many EVMs are there in the Ethereum chain?
I am not joking, I really want to know when an EVM gets created. I have read the doc https://ethereum.org/en/developers/docs/evm/, but I'm still not clear on this question. So: do we have only one EVM in the chain? Does each validating node (RPC node) have its own EVM? When MetaMask makes a transaction with an RPC node, does the RPC node create an EVM and load the target smart contract? Or is each MetaMask an EVM? I am really confused. Please help me if you know the answer for sure. Thanks!
The EVM is the smart-contract runtime. Each Ethereum node runs the Ethereum software, and inside that software runs a virtual computer. The EVM is a Turing-complete machine, and since it runs on many different nodes and you have no direct access to it, it is called a "virtual machine". It is a kind of cloud-computing machine; the only thing it can do is execute smart contracts.
MetaMask is just a wallet, a middleman. It passes your requests on to the Ethereum blockchain.
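To make "each node has its own EVM" concrete: a read-only eth_call is executed locally, in the EVM of whichever node your RPC endpoint points at, without creating a transaction on the chain. A minimal sketch over raw JSON-RPC; the endpoint URL and calldata are placeholders:

```typescript
// Sketch: a raw JSON-RPC eth_call. The node that rpcUrl points at runs
// the contract bytecode in its own local EVM and returns the result;
// nothing is broadcast to the chain.
const rpcUrl = "https://rpc.example.org"; // hypothetical RPC node

async function ethCall(to: string, data: string): Promise<string> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_call", // executed by this one node, in its EVM
      params: [{ to, data }, "latest"],
    }),
  });
  const { result } = await res.json();
  return result; // ABI-encoded return value
}
```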

Guidance on when time-bound database read operations should use MassTransit request/response

For create operations it is clear that putting the message on a queue is a good idea, in case processing or creation of that entity takes longer than expected, and for the other benefits queues bring.
However, for read operations that are time-bound (they must return to the UI in less than 3 seconds) it is not entirely clear whether a queue is a good idea.
http://masstransit-project.com/MassTransit/usage/request-response.html provides a nice abstraction, but it goes through the queue.
Can someone offer some suggestions as to why I would or wouldn't use MassTransit (or, for that matter, any similar technology like NServiceBus) for database read operations that are UI time-bound?
Should I use MassTransit only for long-running processes?
Request/reply is a perfectly valid pattern for time-bound operations. Transport costs, in the case of RabbitMQ for example, are very low. I measured the performance of request/response using ServiceStack (which is very fast) and MassTransit: there is an initial delay with MassTransit while it caches the endpoints, but apart from that the speed is pretty much the same.
Benefits here are:
Retries
Fine tuning of timeouts
Easy scaling with competing consumers
just to name the most obvious ones.
And with error handling, failed requests end up in the error queue, so there is no data loss and you can always look there to find out what went wrong and why.
Update: there is a SOA pattern that describes this (or a rather similar) approach. It is called Decoupled Invocation.
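For illustration, here is what the same request/reply-with-timeout shape looks like at the transport level, written against RabbitMQ with the Node amqplib client (MassTransit does the equivalent plumbing for you in .NET). The "read-service" queue name and the 3-second budget are assumptions for the sketch:

```typescript
// Sketch of request/reply over RabbitMQ with an explicit timeout, using
// the amqplib client. Queue name and timeout are illustrative only.
import * as amqp from "amqplib";
import { randomUUID } from "crypto";

async function requestWithTimeout(query: object, timeoutMs = 3000): Promise<object> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();

  // Exclusive, server-named reply queue, private to this requester.
  const { queue: replyQueue } = await ch.assertQueue("", { exclusive: true });
  const correlationId = randomUUID();

  const answer = new Promise<object>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`read request timed out after ${timeoutMs}ms`)),
      timeoutMs,
    );
    ch.consume(
      replyQueue,
      (msg) => {
        if (msg && msg.properties.correlationId === correlationId) {
          clearTimeout(timer);
          resolve(JSON.parse(msg.content.toString()));
        }
      },
      { noAck: true },
    );
  });

  ch.sendToQueue("read-service", Buffer.from(JSON.stringify(query)), {
    correlationId,
    replyTo: replyQueue,
  });

  try {
    return await answer;
  } finally {
    await conn.close(); // a real client would keep the connection pooled
  }
}
```

The per-request connection here is only for readability; keeping the connection and reply queue alive across requests is what makes the per-call transport cost low in practice.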

Is hosting my multiplayer HTML5 game on a free heroku dyno hurting my network performance?

I've recently built a multiplayer game in HTML5, using the TCP-based WebSocket protocol for the networking. I have already taken steps in my code to minimize lag (interpolation, minimizing the number and size of messages sent), but I occasionally run into lag and choppiness that I believe come from a combination of packet loss and TCP's in-order delivery policy.
To elaborate: my game sends frequent WebSocket messages to players to update them on the positions of the enemy players. If a packet gets dropped or delayed, my understanding is that it prevents later packets from being delivered in a timely manner (head-of-line blocking), which causes enemy players to appear frozen in place and then zoom to the correct location once the delayed packet finally arrives.
I confess that my understanding of networking/bandwidth/congestion is quite weak. I've been wondering whether running my game on a single free Heroku dyno, which is basically a VM on another virtual server (Heroku dynos run on EC2 instances), could be exacerbating this problem. Do Heroku dynos, and multi-tenant servers in general, tend to have worse network congestion due to noisy neighbors or other causes?
Yes. You don't get dedicated network performance from Heroku instances. Some classes of EC2 instances in a VPC can have "Enhanced Networking" enabled, which is supposed to help give you dedicated performance.
Ultimately, though, the best thing to do before jumping to a new solution is benchmarking. Benchmark what level of throughput and latency you can get from a Heroku dyno, then benchmark an EC2 instance directly to see what kind of difference it makes.
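A rough way to run that comparison is to measure round-trip time over the same WebSocket path the game uses. A sketch with the Node ws client; the endpoint URL and the "echo" message type are assumptions, so the server would need a matching handler that sends the message back:

```typescript
// Rough round-trip probe: send a timestamped message and log how long
// the echo takes. URL and message shape are hypothetical.
import WebSocket from "ws";

const ws = new WebSocket("wss://your-app.herokuapp.com"); // hypothetical endpoint

ws.on("open", () => {
  setInterval(() => {
    ws.send(JSON.stringify({ type: "echo", sentAt: Date.now() }));
  }, 1000);
});

ws.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  if (msg.type === "echo") {
    // Point the URL at the dyno, then at a bare EC2 instance, and compare.
    console.log(`round trip: ${Date.now() - msg.sentAt} ms`);
  }
});
```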

Architecture advice for EventMachine and MySQL

We are writing a real-time game in EventMachine/Ruby. We're using ActiveRecord with MySQL for storing the game objects.
When we start the server, we plan to load all the game objects into memory. This will allow us to avoid any blocking/slow SQL queries through ActiveRecord.
However, we still need to persist the data in the database in case the server crashes, of course.
What are our options for doing so? I could use EM.defer, but I have no idea how many concurrent players that could handle, since the thread pool is limited to 20.
Currently I'm thinking Resque with Redis would be the best bet: do everything with the objects in memory, and whenever a save needs to hit the database, fire off a job and add it to the Resque queue.
Any advice?
The thread-pool size can be tweaked; see EventMachine.threadpool_size.
Each server process (Apache, ...) will spawn its own EventMachine reactor and its own EM.defer thread pool, so if you use a forking server (a Mongrel farm, Passenger, ...) you don't need to go crazy with the thread-pool size.
See EM-Synchrony by Ilya Grigorik (https://github.com/igrigorik/em-synchrony); you should be able to simplify your code with it.
AFAIK, MySQL has a non-blocking driver that you can use freely with EM, and EM::Synchrony supports it (http://www.igvita.com/2010/04/15/non-blocking-activerecord-rails/). This would mean you don't need EM.defer at all!
Take a look at Thin (https://github.com/macournoyer/thin/); it's a non-blocking, EM-based web server that supports Rails.
Having said all this, writing evented code is a bitch: forget about stack traces, and make sure you run benchmark tests often, as anything blocking your reactor will block the entire application.
Also, all of this applies to MRI Ruby ONLY. If you mean to use JRuby, you're bound to get into trouble: the thread-safety of EventMachine seems to rely largely on MRI Ruby's GIL, and standard patterns don't work (many aspects of it can be made to work with this fork, https://github.com/WebtehHR/eventmachine/tree/v1.0.3_w_fix, which fixes some issues EM has with JRuby).
Unfortunately, the maintainers of https://github.com/eventmachine/eventmachine are not very active; the project currently has 200+ issues and almost 60 open pull requests, which is why I've had to use a separate fork to continue with my current project. EM is still an awesome project, just don't expect the problems you encounter to be fixed quickly, so do your best not to stray from the well-trodden path of EM use.
Another problem with JRuby is that EM::Synchrony imposes a heavy performance penalty, because JRuby (as of 1.7.8) doesn't implement fibers natively but maps them to Java native threads, which are MUCH slower.
Also, have you considered messaging with something like RabbitMQ (it has a synchronous driver, https://github.com/ruby-amqp/bunny, and an evented one, https://github.com/ruby-amqp/amqp) as a way to communicate game objects between clients, and perhaps to reduce the load on the database / distributed memory store you had in mind?
Redis/Resque seems good, but if all the jobs need to do is simple persistence, and if there will be A LOT of such calls, you might want to consider beanstalkd: it has a much faster but simpler queue than Resque, and you can probably make this even faster if you don't really need ActiveRecord to dump attribute hashes into the database; see "delayed_job vs resque vs beanstalkd?". A sketch of the write-behind idea follows below.
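Self-contained sketch of that write-behind pattern (language aside; in the Ruby stack the drainer's role is played by Resque/Redis or beanstalkd workers): the game loop only touches memory, objects are marked dirty, and a background drainer persists them off the hot path. persist() is a stand-in for the real database write:

```typescript
// Write-behind: mutate in memory, enqueue a save, drain in the background.
type GameObject = { id: string; x: number; y: number };

const world = new Map<string, GameObject>(); // authoritative in-memory state
const dirty = new Set<string>();             // ids with unsaved changes

function moveObject(id: string, x: number, y: number): void {
  const obj = world.get(id);
  if (!obj) return;
  obj.x = x;
  obj.y = y;
  dirty.add(id); // cheap: the game loop never waits on the database
}

async function persist(obj: GameObject): Promise<void> {
  // Placeholder for the real write (SQL UPDATE, queue job, ...).
  console.log(`saving ${obj.id} at (${obj.x}, ${obj.y})`);
}

// Background drainer: flush dirty objects in batches, off the hot path.
// On a crash you lose at most one flush interval of changes.
setInterval(async () => {
  const batch = [...dirty];
  dirty.clear();
  for (const id of batch) {
    const obj = world.get(id);
    if (obj) await persist(obj);
  }
}, 500);
```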
A couple of years and a failed project later, some thoughts:
Avoid EventMachine if at all possible; with today's YARV/MRI Ruby there are plenty of ways to peg your CPU on an IO-constrained application without wasting memory.
My favorite approach for a web application at this time is to use Puma with multiple processes and threads.
Keep in mind that the GIL in YARV only affects Ruby interpreter code, not IO operations, meaning that on an IO-constrained application you can add threads and get better utilization of a single core,
and add more processes to get better utilization of many cores. :) On a Heroku 1x worker we run 2 processes with 4 threads each, and in benchmarks this pegs our CPU, meaning the application is no longer IO-bound but CPU-bound, without unacceptable memory cost.
When we needed super-fast responses, we were troubled by slow DB write times; since those writes did not affect the response to the client, we made them asynchronous using Sidekiq/Resque.
In hindsight you could even use Celluloid or concurrent-ruby for asynchronous IO reads/writes (think DB writes, cache visits, etc.); it's less overhead and infrastructure, but harder to debug and troubleshoot in production. My worst nightmare was an async operation failing silently with no error trace in our errors console (an exception inside the exception handling, for example).
The end result is that your application gets the same sort of benefits you used to get from EventMachine (elimination of the IO bound, full utilization of the CPU without a huge memory footprint, parallel non-blocking IO) without resorting to writing reactor code, which is a complete bitch to do, as explained in my 2013 post.

What happens during Stand-By and Hibernation?

It just hit me the other day. What actually happens when I tell the computer to go into Stand-By or to Hibernate?
More specifically, what implications, if any, does it have for code that is running? For example, if an application is compressing some files, encoding video, checking email, running a database query, generating reports, or just processing lots of data or doing complicated math, what happens? Can you end up with a bug in your video? Can the database query fail? Can the data processing end up containing errors?
I'm asking this both out of general curiosity, but also because I started to wonder if this is something I should think about when I program myself.
Remember that the OS scheduler already freezes your program a gazillion times each second, which means your program can already cope with being frozen. From your point of view there isn't much difference between stand-by, hibernation, and an ordinary context switch.
What is different is that you'll be frozen for a long time, and that is the only thing you really need to think about. In most cases it shouldn't be a problem.
If you have a network connection, you'll probably need to re-establish it, and there are similar issues. But that just means checking for errors on all IO operations, which I'm sure you're already doing... :-)
My initial thought is that as long as your program and its ecosystem are contained within the PC that is going into stand-by or hibernation, then upon resume your program should not be affected.
However, if you are, say, updating a record in a database hosted on a separate machine, then the hibernation/stand-by period will be treated as a timeout by the remote side.
If your program needs to react to such changes in "power status", you can listen for the WM_POWERBROADCAST message, as described on MSDN.
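As an aside, if the app happens to run under Electron rather than native Win32, its powerMonitor module surfaces the same suspend/resume notifications that WM_POWERBROADCAST gives you. A minimal sketch, assuming an Electron app (main process, after the app is ready):

```typescript
// Sketch: reacting to sleep/wake in an Electron main process.
import { app, powerMonitor } from "electron";

app.whenReady().then(() => {
  powerMonitor.on("suspend", () => {
    // About to sleep: checkpoint work, pause timers, flush buffers.
    console.log("system is going to sleep");
  });

  powerMonitor.on("resume", () => {
    // Woke up: re-establish connections, refresh stale data.
    console.log("system woke up");
  });
});
```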
Stand-By keeps your "state" alive by keeping it in RAM. As a consequence, if you lose power, you lose the stored "state".
But it makes suspending and resuming quicker.
Hibernation writes your "state" (the contents of RAM) to the hard disk, so if you lose power you can still come back three days later. But it's slower.
I guess a limitation of Stand-By is how much RAM you've got, but I'm sure virtual memory must be employed by Stand-By when it runs out of physical RAM. I'll look that up though and get back!
The Wikipedia article on ACPI contains the details of the different power-saving modes present in modern PCs.
Here's the basic idea, as I understand it: keep the current state of the system persisted, so that when the machine is brought back into operation it can resume in the state it was in before it was put into sleep/standby/hibernation. Think of it as serialization for your PC.
In standby, the computer keeps feeding power to the RAM, as main memory is volatile and needs constant refreshing to hold on to its state. This means the hard drives, CPU, and other components can be turned off, as long as there is enough power to keep the DRAM refreshed so its contents don't disappear.
In hibernation, main memory is also turned off, so its contents must be copied to permanent storage, such as a hard drive, before the system power is cut. Other than that, the basic premise of hibernation is no different from standby: store the current state of the machine to restore at a later time.
With that in mind, it's probably not too likely that going into standby or hibernation will cause problems for tasks executing at the moment. However, it may not be a good idea to let network activity stop in the middle of execution, as, depending on the protocol, your network connection could time out and be unable to resume once the system returns to its running state.
Also, some machines just have flaky power-saving drivers, which may cause them to go into standby and never come back, but that's a completely different issue.
There are some implications for your code. Hibernation is more than just a context switch by the scheduler: network connections will be closed, and network drives or removable media might be disconnected during the hibernation.
I don't think your application can be notified of hibernation (but I might be wrong). What you should do is handle error scenarios (loss of network connectivity, for example) as gracefully as possible, and note that those error scenarios can occur during normal operation as well, not only when going into hibernation. One way to structure that is sketched below.
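As one illustration of "handle error scenarios gracefully", a small retry-with-backoff wrapper lets an IO call interrupted by hibernation (or an everyday network blip) recover on resume instead of failing outright. All names are illustrative, not from any particular library:

```typescript
// Sketch: retry a failing async operation with exponential backoff.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await op();
    } catch (err) {
      if (i + 1 >= attempts) throw err; // out of retries: surface the error
      const delay = baseDelayMs * 2 ** i; // exponential backoff
      console.warn(`attempt ${i + 1} failed, retrying in ${delay} ms`);
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}

// Usage (hypothetical): a remote update that survives a standby/resume cycle.
// await withRetry(() => db.updateRecord(id, fields));
```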