Ethereum Genesis file private network

Need some basic info on Ethereum.
Can I send 1 million transactions in a day on an Ethereum private network? If yes, how much gas will be required (approximately)?
What is the maximum gas limit we can define for a node?
And one more doubt: if I reinitialize the genesis file, does a new blockchain start, or does it continue with the older one?

Can I send 1 million transactions in a day on an Ethereum private network?
Yes; that's around 12 transactions per second, which is no problem:
1000000 / (24 * 60 * 60) = 11.574
If yes, how much gas will be required (approximately)?
A transaction that does nothing but transfer value costs 21,000 gas.
That is 21 billion gas for 1 million transactions per day, or (assuming a 15-second block time, i.e. 5,760 blocks per day) about 3.6 million gas per block:
21000000000 / (24 * 60 * 4) = 3645833.333
The default gas limit on the Ethereum public network is 4,712,388 (1.5 * pi million), but it's trivial to increase the target gas limit.
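For what it's worth, the arithmetic above can be checked with a few lines of TypeScript (a sketch; all numbers are the ones from this answer):

const txPerDay = 1_000_000;
const gasPerTx = 21_000;                                  // plain value transfer
const blockTimeSec = 15;                                  // assumed block time
const blocksPerDay = (24 * 60 * 60) / blockTimeSec;       // 5,760 blocks
const txPerSec = txPerDay / (24 * 60 * 60);               // ~11.57 tx/s
const gasPerBlock = (txPerDay * gasPerTx) / blocksPerDay; // ~3,645,833 gas
console.log({ txPerSec, gasPerBlock, fitsDefaultLimit: gasPerBlock < 4_712_388 });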
What is the maximum gas limit we can define for a node?
In theory you should be able to set the gas limit as high as you wish; in practice that's not workable, as discussed in EIP-106, which suggests capping the maximum block gas limit at 2^63 - 1.
If I reinitialize the genesis file, does a new blockchain start, or does it continue with the older one?
If you change the genesis file and reinitialize, this will in most cases start a new blockchain: a different genesis block has a different hash, so the node will no longer treat the old chain's blocks as part of its history.
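For illustration, a minimal geth-style genesis.json for a private network might look like the following; the chainId, difficulty, and prefunded account are placeholder values, and gasLimit (0x47b760 = 4,700,000) is the per-block limit discussed above. Reinitializing with a different file produces a different genesis hash, and therefore a new chain:

{
  "config": {
    "chainId": 15,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip158Block": 0
  },
  "difficulty": "0x400",
  "gasLimit": "0x47b760",
  "alloc": {
    "0x0000000000000000000000000000000000000001": { "balance": "0xde0b6b3a7640000" }
  }
}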

Related

How to distribute new tokens (fee) between token holders

Good morning everyone.
I have a problem; the only solution I have found so far is very expensive.
Imagine we have a token and we know all of its holders. Now, just for holding this token, we want to give holders even more tokens: for example, if your balance today was higher than 0, at the end of the day you have your balance + "X", which the system gives you as a reward for holding the token.
However, "X" is not the same for all holders. The system earns "Y" every day, and every 24 hours this "Y" has to be distributed among all holders in proportion to the amount of tokens in their balance: the more tokens you have, the bigger your daily reward.
For example:
User 1 has 10 tokens.
User 2 has 90 tokens.
24 hours pass, system has earned 10 tokens.
System distributes these 10 tokens between 2 holders:
User 1 receives 1 token. His balance will be 11 tokens.
User 2 receives 9 tokens. His balance will be 99 tokens.
The only way I see right now is a for loop over all the holders, but with a lot of holders the cost of this operation will be huge, and it would have to be done in sections: if we have 20,000 holders we can't go through them all in one loop, we would run out of gas.
So what I have thought of is a function that takes an index range of holders and the number of tokens to distribute among them, and is called several times until everything is distributed. As far as I can tell, that is a poor solution to the problem. Any other ideas?
If gas cost is the issue, let users claim their reward rather than sending it to them yourself. Lots of ICOs use Merkle distributors for exactly this reason: wrap your logic around Merkle proofs and update the Merkle root every 24 hours.
Here are some resources:
https://collectednotes.com/cibrax/using-merkle-trees-for-bulk-transfers-in-ethereum
https://github.com/Uniswap/merkle-distributor
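As a rough sketch of the claim-based approach, the daily root can be computed off-chain along these lines; this assumes the merkletreejs, keccak256, and ethers npm packages (the same building blocks the Uniswap distributor tooling uses), and the leaf encoding and amounts here are made-up examples, not code from the question:

import { MerkleTree } from "merkletreejs";
import keccak256 from "keccak256";
import { ethers } from "ethers";

interface Reward { account: string; amount: string; } // amount in wei, as a decimal string

// Leaf = keccak256(abi.encodePacked(account, amount)), matching a typical
// Solidity-side check of keccak256(abi.encodePacked(msg.sender, amount)).
function leafOf(r: Reward): Buffer {
  const hash = ethers.utils.solidityKeccak256(["address", "uint256"], [r.account, r.amount]);
  return Buffer.from(hash.slice(2), "hex");
}

const rewards: Reward[] = [
  { account: "0x1111111111111111111111111111111111111111", amount: "1000000000000000000" }, // 1 token
  { account: "0x2222222222222222222222222222222222222222", amount: "9000000000000000000" }, // 9 tokens
];

const tree = new MerkleTree(rewards.map(leafOf), keccak256, { sortPairs: true });

// Publish this root on-chain once per day; it commits to every holder's reward,
// and each holder calls claim(amount, proof) instead of you looping over 20,000 balances.
console.log("root:", tree.getHexRoot());
console.log("proof for user 1:", tree.getHexProof(leafOf(rewards[0])));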

Performance socket nodejs + mysql

I am developing an application that consists of a Node.js server, basically a socket listening for incoming connections.
The data arriving at my server comes from GPS trackers (approximately 30), each sending 5 records per minute, so in a minute I will have 5 * 30 = 150 records, in an hour 150 * 60 = 9,000 records, in a day 9,000 * 24 = 216,000, and in a month 216,000 * 30 = 6,480,000 records.
In addition to the latitude and longitude, I have to store in the database (MySQL) the cumulative distance of each tracker. Each tracker sends positions to the server, and I have to calculate the kilometers between two points every time I receive data (to reduce the work the database has to do once it holds millions of records).
So the question is: what is the correct way to sum the kilometers and store them?
I think summing over the entire table is not a solution, because with millions of records it will be very slow. Maybe, every time I have to store a new point (150 times per minute), I can select the last record in the database and add the newly calculated distance to its cumulative kilometers?
2.5 inserts per second is only a modest rate. 6M records/month -- no problem.
How do you compute the kms? The distance from the previous GPS reading to the current one? Or maybe back to the start? Keep in mind that GPS readings can be a bit flaky; a car going in a straight line may look drunk when plotted every 12 seconds. Either way, I will assume you need some kind of index on (sensor, sequence) to find the previous (or first) reading for the distance calculation.
But, what will you do with the distance? Is it being continually read out for display somewhere? That is, are you updating some non-MySQL thingie 150 times per minute? If so, you have an app that should receive the new GPS reading, store it into MySQL, read the starting point (or remember it), compute the kms and update the graph. That is, MySQL is not the focus here, but your app is.
As for representation of lat/lng, I reference my cheat sheet to see that FLOAT may be optimal.
Kilometers should almost certainly be stored as FLOAT. That gives you about 7 significant digits of precision. You should decide whether the value represents "meters" or "kilometers" (the precision is the same either way).
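To illustrate the running-total idea without a SELECT of the previous row on every insert, here is a sketch in Node/TypeScript; it assumes the mysql2/promise package, and the table and column names are invented for the example:

import mysql from "mysql2/promise";

const EARTH_RADIUS_KM = 6371;

// Haversine great-circle distance between two lat/lng points, in km.
function haversineKm(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(lat2 - lat1);
  const dLng = rad(lng2 - lng1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

type LastPoint = { lat: number; lng: number; totalKm: number };
const lastByTracker = new Map<number, LastPoint>(); // ~30 trackers fit easily in memory

const pool = mysql.createPool({ host: "localhost", user: "gps", database: "tracking" });

// Called for every incoming reading (about 2.5 times per second in total).
async function storeReading(trackerId: number, lat: number, lng: number): Promise<void> {
  const prev = lastByTracker.get(trackerId);
  const totalKm = prev ? prev.totalKm + haversineKm(prev.lat, prev.lng, lat, lng) : 0;
  lastByTracker.set(trackerId, { lat, lng, totalKm });
  // positions(tracker_id, lat FLOAT, lng FLOAT, total_km FLOAT, ts) is an assumed schema.
  await pool.execute(
    "INSERT INTO positions (tracker_id, lat, lng, total_km, ts) VALUES (?, ?, ?, ?, NOW())",
    [trackerId, lat, lng, totalKm]
  );
}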

Google maps api key - limits

Will my API key be blocked for rest of the day when I reach the daily requests limit? I don't want to buy billing plan or get unexpected charges.
https://developers.google.com/maps/faq#usage_exceed
You won't be charged, but your API calls will return an error message.
It is my impression that as you approach your daily quota limit, the system starts giving errors; for example, during the last 4 hours of the day the errors gradually increase, with perhaps 100% errors during the last hour.
Most days my total requests have been within my quota. One day it went over: 2,645 versus my 2,500 daily quota.
I have tried to spread the error misery evenly around the world by limiting accesses to 3 per 100 seconds. This may be working, but I have not seen any errors on the graphs attributed to exceeding the 100-second quota, which is surprising since I am often exceeding 0.04 requests per second (5-minute average).
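For what it's worth, the "3 per 100 seconds" throttle described above can be sketched as a simple token bucket; everything here beyond those two numbers is invented for illustration:

class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();
  constructor(private capacity: number, private refillPerMs: number) {
    this.tokens = capacity;
  }
  tryTake(): boolean {
    const now = Date.now();
    this.tokens = Math.min(this.capacity, this.tokens + (now - this.lastRefill) * this.refillPerMs);
    this.lastRefill = now;
    if (this.tokens >= 1) { this.tokens -= 1; return true; }
    return false;
  }
}

// 3 requests per 100,000 ms, i.e. 0.03 requests/second on average.
const mapsQuota = new TokenBucket(3, 3 / 100_000);

function maybeCallMapsApi(): void {
  if (!mapsQuota.tryTake()) {
    return; // over the local quota: skip or queue instead of burning the daily limit
  }
  // ... issue the actual Maps API request here ...
}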

How many connections / second before a MySQL database becomes unstable?

I'm developing an app that downloads and uploads an average of 10KB every 60 seconds. All within a single connection / second thanks to my supreme engineering. :)
Each instance of the app opens and closes a new connection every 60 seconds; fortunately, the connections are short-lived.
How many users before this process becomes unstable? Assuming an average of 4 hours of synchronization (240 connections) per day, per user.
I'm running on an average server, the kind a $150/year hosting plan provides.
I can't find a definite formula. All I know is that 2,000 connections per second is a lot, yet I'm not sure.
2,000 conn/sec = 120,000 conn/minute, or 120,000 users at one connection per user per minute.
Am I thinking right? Is there anything I can do to maximize this number?
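As a sanity check on that arithmetic (the 2,000 connections per second figure is the question's assumption, not a measured MySQL limit):

const connPerSec = 2000;     // assumed server capacity
const connPerUserPerMin = 1; // each app instance connects once every 60 s
const maxUsers = (connPerSec * 60) / connPerUserPerMin;
console.log(maxUsers);       // 120,000 concurrent users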

Computing estimated times of file copies / movements?

Inspired by this xkcd cartoon, I wondered: what exactly is the best mechanism to give the user an estimate for a file copy / move?
The alt tag on xkcd reads as follows:
They could say "the connection is probably lost," but it's more fun to do naive time-averaging to give you hope that if you wait around for 1,163 hours, it will finally finish.
Ignoring the funny, is that really how it's done in Windows? How about other OSes? Is there a better way?
Have a look at my answer to a similar question (and the other answers there) on how the remaining time is estimated in Windows Explorer.
In my opinion, there is only one way to get good estimates:
Calculate the exact number of bytes to be copied before you begin the copy process
Recalculate your estimate regularly (every 1, 5, or 10 seconds, YMMV) based on the current transfer speed
The current transfer speed can fluctuate heavily when you are copying over a network, so use an average, for example based on the number of bytes transferred since your last estimate.
Note that the first point may require quite some work, if you are copying many files. That is probably why the guys from Microsoft decided to go without it. You need to decide yourself if the additional overhead created by that calculation is worth giving your user a better estimate.
I've done something similar to estimate when a queue will be empty, given that items are being dequeued faster than they are being enqueued. I used linear regression over the most recent N readings of (time,queue size).
This gives better results than a naive
bytes_left_to_copy * elapsed_time / bytes_copied_so_far
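Here is a sketch of that regression idea applied to (time, bytes copied) samples rather than queue sizes; the type and function names are mine, not from the original:

type Sample = { t: number; bytes: number }; // t in seconds since the copy started

function estimateRemainingSeconds(samples: Sample[], totalBytes: number): number | null {
  const n = samples.length;
  if (n < 2) return null;
  // Ordinary least squares fit: bytes ~ intercept + rate * t
  const meanT = samples.reduce((s, p) => s + p.t, 0) / n;
  const meanB = samples.reduce((s, p) => s + p.bytes, 0) / n;
  let num = 0, den = 0;
  for (const p of samples) {
    num += (p.t - meanT) * (p.bytes - meanB);
    den += (p.t - meanT) ** 2;
  }
  if (den === 0 || num <= 0) return null;        // no progress: report "delayed", not a number
  const rate = num / den;                        // bytes per second (slope)
  const intercept = meanB - rate * meanT;
  const tDone = (totalBytes - intercept) / rate; // time at which the fit reaches totalBytes
  return Math.max(0, tDone - samples[n - 1].t);
}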
Start a global timer that fires, say, every 1000 milliseconds and updates a total elapsed time counter. Let's call this variable "elapsedTime".
While the file is being copied, update a local variable with the amount already copied. Let's call this variable "totalCopied".
In the periodically raised timer event, divide totalCopied by elapsedTime to get the number of bytes copied per second. Let's call this variable "bytesPerSec".
Divide the total file size by bytesPerSec to obtain the total number of seconds theoretically required to copy the file. Let's call this variable "totalTime".
Subtract elapsedTime from totalTime and you have a somewhat accurate estimate of the remaining copy time.
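Put together, those steps look roughly like this sketch (the copy loop that updates totalCopied is assumed to run elsewhere):

let elapsedTime = 0;                     // seconds since the copy started
let totalCopied = 0;                     // bytes copied so far, updated by the copy loop
const totalFileSize = 500 * 1024 * 1024; // example: a 500 MB file

const timer = setInterval(() => {
  elapsedTime += 1;
  if (totalCopied === 0) return;         // nothing to extrapolate from yet
  const bytesPerSec = totalCopied / elapsedTime;
  const totalTime = totalFileSize / bytesPerSec; // theoretical total duration
  const remainingTime = totalTime - elapsedTime;
  console.log(`~${Math.round(remainingTime)}s remaining`);
  if (totalCopied >= totalFileSize) clearInterval(timer);
}, 1000);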
I think dialogs should just admit their limitations. It's not annoying because it's failing to give a useful time estimate; it's annoying because it's authoritatively offering an estimate that's obvious nonsense.
So, estimate however you like: based on the current rate or the average rate so far, with rolling averages discarding outliers, or whatever. It depends on the operation and the typical durations of the events that delay it, so you might use a different algorithm when you know the copy involves a network drive. But until your estimate has been fairly consistent for a period equal to the lesser of 30 seconds or 10% of the estimated time, display "oh dear, there seems to be some kind of holdup" when it has massively slowed, or just ignore it if it has massively sped up.
For example, dialog messages taken at 1-second intervals when a connection briefly stalls:
remaining: 60 seconds // estimate is 60 seconds
remaining: 59 seconds // estimate is 59 seconds
remaining: delayed [was 59 seconds] // estimate is 12 hours
remaining: delayed [was 59 seconds] // estimate is infinity
remaining: delayed [was 59 seconds] // got data: estimate is 59 seconds
// six seconds later
remaining: 53 seconds // estimate is 53 seconds
Most of all I would never display seconds (only hours and minutes). I think it's really frustrating when you sit there and wait for a minute while the timer jumps between 10 and 20 seconds. And always display real information like: xxx/yyyy MB copied.
I would also include something like this:
if timeLeft > 5h --> Inform user that this might not work properly
if timeLeft > 10h --> Inform user that there might be better ways to move the file
if timeLeft > 24h --> Abort and check for problems
I would also inform the user if the estimated time varies too much.
And if it's not too complicated, there should be an auto-check function that checks if the process is still alive and working properly every 1-10 minutes (depending on the application).
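Rendered as a runnable sketch (checking the largest bound first; the hour thresholds and messages come from this answer, the function names are illustrative):

function informUser(message: string): void {
  console.log(message);
}

function checkEstimate(timeLeftHours: number): void {
  if (timeLeftHours > 24) {
    throw new Error("Copy aborted: estimated time exceeded 24 hours; check for problems.");
  } else if (timeLeftHours > 10) {
    informUser("There might be better ways to move this file.");
  } else if (timeLeftHours > 5) {
    informUser("This might not work properly.");
  }
}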
Speaking of network file copy, the best thing is to take into account the size of the file to be transferred, the network response time, and so on. An approach I used once was:
Connection speed: ping and measure the round-trip time for packets of 15 KB.
Take the file size and compute, theoretically, how long the transfer would take if it were broken into 15 KB packets at that connection speed.
Recalculate the connection speed after the transfer has started and adjust the estimated time accordingly.
I've been pondering this one myself. I have a copy routine, via a Windows Explorer style interface, which allows the transfer of selected files from an Android device to a PC.
At the start, I know the total size of the file(s) to be copied, and since I am using C#.NET, I use a Stopwatch to get the elapsed time; while the copy is in progress, I keep a running total of the bytes copied so far.
I haven't actually tested it yet, but the best way seems to be this -
estimated = elapsed * ((totalSize - copiedSoFar) / copiedSoFar)
I never saw it the way you guys are explaining it, by transferred bytes and total bytes.
The observed behaviour always made a lot more sense to me (more sense, not more accurate) if you assume it uses the bytes of each file and the file count instead. That would explain how the estimate swings so wildly.
If you are transferring large files first, the estimate runs long, even with a stable connection. It is as if it naively assumes that all files are the average size of those transferred so far, and then guesses that this average file size will hold for the entire transfer.
This, and the other ways, all get worse when the connection 'speed' varies...