How can I determine my EOS RAM usage byte-by-byte?

I want to review my EOS account's RAM byte by byte.
How can I see this?
I am specifically trying to determine what gets allocated on account creation.
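A hedged starting point: the EOSIO chain API's get_account call reports an account's total RAM quota and current usage, so comparing ram_usage just before and after an account is created shows how many bytes the creation consumed. A minimal sketch, assuming a runtime with global fetch (e.g. Node 18+); the node URL and account name are placeholders:

```typescript
// Minimal sketch: ask an EOSIO chain API node for an account's RAM totals.
// ENDPOINT and ACCOUNT are placeholders; point them at your node and account.
const ENDPOINT = "https://api.example-eos-node.io"; // hypothetical node URL
const ACCOUNT = "myaccount1234"; // hypothetical account name

async function printRamStats(account: string): Promise<void> {
  const res = await fetch(`${ENDPOINT}/v1/chain/get_account`, {
    method: "POST",
    body: JSON.stringify({ account_name: account }),
  });
  const info = await res.json();
  // ram_quota: total bytes the account owns; ram_usage: bytes currently consumed.
  console.log(`ram_quota: ${info.ram_quota} bytes, ram_usage: ${info.ram_usage} bytes`);
}

printRamStats(ACCOUNT).catch(console.error);
```

Note this reports totals only, not a byte-by-byte breakdown of what is stored.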

Related

Solidity/Ethereum. Cheaper alternative regarding gas

I am learning solidity/ethereum and I came across this situation:
I have a mapping(address => uint) that keeps track of how much every address has paid my contract, and at some point I have to compute what percentage of the total pool a given user has contributed (for example, if the total pool is 100 ether and a user contributed 10 ether, they have contributed 10% of the total pool).
In order to do so, I need access to the total pool. My first instinct was to have a variable totalPool that keeps track of the total value, so that every time an address pays the contract, totalPool += msg.value; is executed. However, while learning about the EVM, I kept reading how expensive it is to operate on storage.
My question is: which is cheaper in terms of gas, keeping a storage variable for the total pool that is updated every time an address pays the contract, or computing the total pool every time I need the contribution ratio?
From what I understand of your use case, your first instinct is probably the simplest and best solution unless you have an easy way to compute the total pool. Keep in mind that in Solidity it is impossible to loop over the elements of a mapping to sum them up. So unless the pool size can be calculated from other variables that would be stored anyway, the totalPool variable is most likely the best way to keep track of it.
I highly recommend that you test as many implementations as you can come up with. Both the ethers.js and web3.js libraries have functions that let you estimate how much gas a transaction should require.
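For example, with ethers.js you can compare candidate implementations by estimating gas before sending anything. A minimal sketch, assuming ethers v5 and a hypothetical payable contribute() function that does the totalPool update; the contract address is a placeholder:

```typescript
import { ethers } from "ethers"; // ethers v5

// Hypothetical ABI fragment for a contract with a payable contribute()
// function that does `totalPool += msg.value;` internally.
const abi = ["function contribute() payable"];
const address = "0x0000000000000000000000000000000000000000"; // placeholder: your deployed contract

async function main(): Promise<void> {
  const provider = new ethers.providers.JsonRpcProvider("http://127.0.0.1:8545");
  const contract = new ethers.Contract(address, abi, provider.getSigner());

  // estimateGas simulates the call and reports the gas it would consume,
  // so you can compare implementations without sending a real transaction.
  const gas = await contract.estimateGas.contribute({
    value: ethers.utils.parseEther("1"),
  });
  console.log(`contribute() would use ~${gas.toString()} gas`);
}

main().catch(console.error);
```

Running this against each implementation on a local node gives you a direct gas comparison for the same payment.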

Queued Message Handler VIs in a parent SubVI whose execution type is set to preallocated

I am creating a sample communication server in LabVIEW.
In the main VI I have a server and clients; the clients' execution is set to preallocated clone reentrant. I use a Queued Message Handler to transfer messages and commands between the server and the clients.
The picture below shows the client VI (preallocated clone reentrant execution) with the Queued Message SubVIs highlighted. In a previous question I asked about the execution type of SubVIs inside the client VIs (preallocated) and the answer was that those SubVIs need to be preallocated too. My question now is about the Queued Message Handler template VIs: should I set their execution type to match the parent VI?
Thank you
The Queued Message Handler VIs seem to me to have appropriate reentrancy settings out of the box.
For example, Enqueue Message, which should always execute quickly, is non-reentrant, but Dequeue Message, which waits for a message if there isn't one already in the queue, is preallocated clone reentrant.
It's good that you're thinking about this, as timing bugs can be a lot harder to trace than simple data-value bugs, but for most purposes I think you can trust the designers of the framework to have chosen correctly.
If you're really not satisfied by this and are still worried that an incorrect reentrancy setting might be causing you trouble, it won't really hurt to change all these VIs to preallocated clone reentrant. Unless you are using these VIs to pass some huge data structure around, the extra memory consumed by the preallocated clones should be small.

What's the free bandwidth included with Google Compute Engine instances?

I'm unable to understand the free bandwidth/traffic allowed per Google Compute Engine instance. I'm using DigitalOcean, and there every server includes free bandwidth/transfer; e.g., the $0.015/hour (1 GB RAM / 1 CPU) droplet includes 2 TB of transfer.
So is there any free bandwidth per Compute Engine instance, or will Google charge for every bit transferred to/from the VM?
As documented on our Network Pricing page, traffic prices depend on the source and destination. There is no free "bucket of bits up to X GB" like a cellphone plan. Rather, certain types of traffic are always free and other types are charged: for example, anything coming in from the internet, or anything sent to another VM in the same zone (using internal IPs), is free.
If you are in the Free Trial, then of course we give you usage credits, so you can use up to that total amount, in dollars, "for free."

Metro App BackgroundTask TimeTrigger/MaintenanceTrigger Usage

I read an article on background tasks: TimeTrigger and MaintenanceTrigger.
Here they demonstrate how these triggers can be used to download email. I'm trying to understand the practicality and appropriateness of this approach.
The quota for background tasks on the lock screen is 2 seconds of CPU time; for non-lock-screen tasks it is 1 second of CPU time.
Given this restriction, how is it possible that one can download emails in this amount of time? Surely, just establishing a connection to the remote server will take more time than that?
Am I misunderstanding something about how background tasks work, or is this article inaccurate?
http://blogs.msdn.com/b/windowsappdev/archive/2012/05/24/being-productive-in-the-background-background-tasks.aspx
CPU Time is not the same as the amount of seconds that have passed. Your link references a Word Document, Introduction to Background Tasks, which contains the following:
CPU usage time refers to the amount of CPU time used by the app and not the wall clock time of the background task. For example, if the background task is waiting in its code for the remote server to respond, and it is not actually using the CPU, then the wait time is not counted against the CPU quota because the background task is not using the CPU.
If you are establishing a connection to the mail server (and waiting for it to respond), then you are not using any CPU. This means the time that you spent waiting is not counted against you.
Of course, you will want to test your background task to make sure that it stays within the limits.
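As a rough illustration of why a slow network round-trip need not exhaust the quota (generic async code, not the actual WinRT background-task API; the mail server URL is a placeholder):

```typescript
// Illustrative only (generic async code, not the WinRT background-task API).
// MAIL_SERVER_URL is a placeholder.
const MAIL_SERVER_URL = "https://mail.example.com/inbox";

async function downloadInbox(): Promise<string> {
  // While the request is in flight, the task is idle waiting on the network:
  // that wait is wall-clock time, not CPU time, so it doesn't hit the quota.
  const response = await fetch(MAIL_SERVER_URL);
  // Reading and parsing the body is what actually consumes CPU time.
  return response.text();
}
```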

Measuring performance: real clicks vs the "ab" command

I have a website in closed beta, developed in Django, running with MySQL on Debian.
In the last few days, the main page has been showing a slowdown. For every ten clicks, one or two get an extremely slow response (10 seconds or more); the others are as fast as they used to be.
While searching for the problem, I ran into an issue I couldn't grasp:
The top command shows that when I request the main page, MySQL shoots up to 90%-100% CPU usage. I get the page just as the CPU usage returns to normal. So I figured it was the database.
Then I ran ab with the parameters -n 1000 -c 5 and got decent performance, about 100 pages per second, just as it was before the slowdown. I would expect worse performance, given that 10-20% of requests take 10 seconds to load.
Is this conflict between ab and "real" clicks normal, or am I using ab with a wrong configuration?
ab doesn't execute many parts of the page (JavaScript, for example), so you'll probably notice a significant difference in the load on the webserver.
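If you want to measure what a "real click" sees, one option is to time individual sequential requests and look at the slowest ones instead of ab's aggregate throughput. A minimal sketch; the URL and request count are placeholders:

```typescript
// Sketch: time sequential requests to the main page to surface slow outliers
// that an aggregate pages-per-second figure can hide. URL/count are placeholders.
const PAGE_URL = "http://localhost:8000/"; // your site's main page
const REQUESTS = 50;

async function measure(): Promise<void> {
  const times: number[] = [];
  for (let i = 0; i < REQUESTS; i++) {
    const start = Date.now();
    await fetch(PAGE_URL); // one request at a time, like a single user clicking
    times.push(Date.now() - start);
  }
  times.sort((a, b) => a - b);
  console.log(`median:  ${times[Math.floor(REQUESTS / 2)]} ms`);
  console.log(`slowest: ${times[REQUESTS - 1]} ms`);
}

measure().catch(console.error);
```

If the slowest samples here show the 10-second responses while ab stays fast, that points at something ab skips (per-session state, cookies, or uncached query paths) rather than raw server throughput.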