The site https://ethstats.net displays statistics about the Ethereum network but I did not find a precise definition of each statistic and graph displayed. Is there somewhere I can get this information?
These are the items of the overview:
Best Block is the heaviest block in terms of cumulative difficulty, or in simple words: the highest block number of the longest valid chain.
Uncles are orphaned blocks, but in contrast to other blockchain systems, uncles are rewarded and included in the blockchain. The stat shows the current block's uncle count and the uncle count of the last 50 blocks.
Last Block shows the time since the last block was mined, usually in seconds.
Average Block Time is, well, the average time between two blocks, excluding uncles, in seconds. Should be something around 15 seconds.
Average Network Hashrate is the number of hashes per second the network's miners are computing to find a new block. 5 TH/s means the network is computing five trillion hashes per second.
Difficulty is the current mining difficulty for finding a new block, which basically means how hard it is to find a matching hash.
Active Nodes is the number of nodes connected to the Ethstats dashboard (not the whole Ethereum network!).
Gas Price is the price miners accept for gas; gas is used to calculate transaction fees. 20 gwei is the current default, which stands for 20 giga-wei, i.e. twenty billion wei, or 0.00000002 ETH.
Gas Limit is the block gas limit. It defaults to 1.5π million gas (4,712,388), and miners can only include transactions until the gas limit is met (and the block is full). The gas limit is the analogue of Bitcoin's block size limit, but it is not fixed in size.
Page Latency and Uptime are specific stats for the dashboard.
Block Time Chart shows the actual time between the most recent blocks.
Difficulty Chart shows the difficulty of the most recent blocks.
Block Propagation Chart shows how fast blocks are shared among the nodes connected to the dashboard.
Last Block Miners are the addresses of the miners who found most of the last blocks.
Uncle Count Chart shows numbers of uncles per 25 blocks per bar.
Transactions Chart shows numbers of transactions included in last blocks.
Gas Spending Chart shows how much gas was spent on transactions in each block, note the correlation to the transactions chart.
Gas Limit Chart shows the dynamically adjusted block gas limit for each block.
And below you see details of connected nodes.
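The gwei arithmetic in the Gas Price item above can be checked directly. A minimal sketch using the standard Ethereum denominations (the function name is just illustrative):

```python
# Standard Ethereum denominations, in wei.
WEI_PER_GWEI = 10**9
WEI_PER_ETH = 10**18

def gwei_to_eth(gwei):
    """Convert a gas price in gwei to ETH."""
    return gwei * WEI_PER_GWEI / WEI_PER_ETH

# 20 gwei = 20,000,000,000 wei = 0.00000002 ETH
print(gwei_to_eth(20))  # 2e-08
```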
I'm using Hardhat with the gas reporter, but I'm not able to understand the following results:
Optimizer enabled: false
Runs: 200
Block limit: 30000000 gas
% of limit
Optimizer (whether it's enabled or disabled) and the target number of contract runs, for which the optimizer should tune the contract bytecode, are options of the Solidity compiler. When you compile the contract with the optimizer, it can decrease either the total bytecode size or the amount of gas needed to execute some functions. (docs)
Block limit states the number of gas units that can fit into one block. Different networks might have different values, some have dynamically adjusted limits, plus you can usually set your own limit if you're using an emulator or a private network. (docs)
% of limit states how large a share of the total block limit your contract deployment took. Example from your table: deployment of HashContract cost 611k gas units, which is approx. 2% of the 30M block limit. If the number exceeded 100%, the transaction could never be included in a block - at least not in a block with the same or a smaller limit. Also, if the transaction has a low gasPrice and a high % of the total block limit, some miners/validators might not be able to fit the transaction into a block (as transactions are usually ordered from highest gasPrice to lowest), so it might take longer to be included in a block.
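A one-liner reproduces the figure from the table (assuming the 30M block limit shown in the report):

```python
def pct_of_block_limit(gas_used, block_limit=30_000_000):
    """Share of the block gas limit consumed by one transaction, in percent."""
    return 100 * gas_used / block_limit

# Deployment from the question: ~611k gas against a 30M block limit.
print(round(pct_of_block_limit(611_000), 1))  # 2.0
```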
We're running a site for booking salmon fishing licenses. The site has no problem handling the traffic 364 days a year. The 365th day is when the license sale opens, and that's where the problem occurs. The servers are struggling more and more each year due to increased traffic, and we have to further optimize our booking query.
The licenses are divided into many different types (tbl_license_types), and each license type is connected to one or more fishing zones (tbl_zones).
Each license type can have a seasonal quota, which is a single value set as an integer field in tbl_license_types.
Each zone can have a daily quota, a weekly quota and a seasonal quota. The daily quota is the same for all days, and the seasonal quota of course is a single value. Daily and seasonal are therefore integer fields in tbl_zones. The weekly quota however differs by week, therefore those are specified in the separate tbl_weekly_quotas.
Bookings can be for one or more consecutive dates, but are only stated as From_date and To_date in tbl_shopping_cart (and tbl_bookings). For each booking attempt made by a user, the quotas have to be checked against already allowed bookings in both tbl_shopping_cart and tbl_bookings.
To be able to count/group by date, we use a view called view_season_calendar with a single column containing all the dates of the current season.
In the beginning we used a transaction where we first made a query to check the quotas, and if quotas allowed we would use a second query to insert the booking to tbl_bookings.
However that gave a lot of deadlocks under relatively moderate traffic, so we redesigned it to a single query (pseudo-code):
INSERT INTO tbl_bookings (_booking_)
WHERE _lowest_found_quota_ >= _requested_number_of_licenses_
where _lowest_found_quota_ is a ~330 lines long SELECT with multiple subqueries and the related tables being joined multiple times in order to check all quotas.
Example: User wants to book License type A, for zones 5 and 6, from 2020-05-19 to 2020-05-25.
The system needs to
count previous bookings of license type A against the license type A seasonal quota.
count previous bookings in zone 5 for each of the seven dates against the zone 5 daily quota.
same count for zone 6.
count previous bookings in zone 5 for each of the two weeks the dates are part of, against zone 5 weekly quota.
same count for zone 6.
count all previous bookings in zone 5 for the current season against zone 5 seasonal quota.
same count for zone 6.
If all are within quotas, insert the booking.
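The seven checks above can be sketched outside SQL to make the logic explicit. This is a simplified in-memory model, not the actual 330-line query; the `booked` and `quotas` structures are hypothetical stand-ins for the joins against tbl_bookings / tbl_shopping_cart:

```python
from datetime import date, timedelta

def daterange(d_from, d_to):
    """Yield every date from d_from to d_to inclusive."""
    d = d_from
    while d <= d_to:
        yield d
        d += timedelta(days=1)

def within_quotas(booked, quotas, license_type, zones, d_from, d_to, requested):
    """Check a booking attempt against all applicable quota counts.

    `booked` holds counts of already-accepted bookings, `quotas` the limits,
    keyed per license-type season, zone day, zone week, and zone season.
    """
    # License-type seasonal quota.
    if booked["lic_season"][license_type] + requested > quotas["lic_season"][license_type]:
        return False
    for z in zones:
        # Daily quota, checked for each requested date.
        for d in daterange(d_from, d_to):
            if booked["zone_day"][(z, d)] + requested > quotas["zone_day"][z]:
                return False
        # Weekly quota, checked for each ISO week the range touches.
        for week in {d.isocalendar()[1] for d in daterange(d_from, d_to)}:
            if booked["zone_week"][(z, week)] + requested > quotas["zone_week"][(z, week)]:
                return False
        # Zone seasonal quota.
        if booked["zone_season"][z] + requested > quotas["zone_season"][z]:
            return False
    return True
```

The example booking (license type A, zones 5 and 6, 2020-05-19 to 2020-05-25) touches ISO weeks 21 and 22, so this performs exactly the seven counts listed above.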
As I said, this was working well earlier, but due to higher traffic load we need to optimize this query further now. I have some thoughts on how to do this:
Using isolation level READ UNCOMMITTED on each booking until quotas for the requested zones/license type are nearly full, then fallback to the default REPEATABLE READ. As long as there's a lot left of the quota, the count doesn't need to be 100% correct. This will greatly reduce lock waits and deadlocks, right?
Creating one or more views which keeps count of all bookings for each date, week, zone and license type, then using those views in the WHERE clauses of the insert.
If doing no. 2, use READ UNCOMMITTED in the views. If the views report a relevant quota as nearly full, cancel the INSERT and retry with the design we're using today. (Hopefully traffic levels will have come down before quotas become full.)
I would greatly appreciate thoughts on how the query can be done as efficient as possible.
Rate Per Second = RPS
Suggestions to consider for your AWS Parameters group
innodb_lru_scan_depth=100 # from 1024 to conserve 90% of CPU cycles used for function every second
innodb_flush_neighbors=2 # from 1 to reduce time required to lower innodb_buffer_pool_pages_dirty count when busy
read_rnd_buffer_size=128K # from 512K to reduce handler_read_rnd_next RPS of 3,227
innodb_io_capacity=1900 # from 200 to use more of SSD IOPS
log_slow_verbosity=query_plan,explain # to make slow query log more useful
have_symlink=NO # from YES for some protection from ransomware
You will find these changes will cause transactions to complete processing quicker. The FAQ "Q. How can I find JOINs or QUERIES not using indexes?" can assist in reducing the select_scan rate of 1,173 per hour. Com_rollback averages 1 every ~2,700 seconds, usually correctable with a consistent read order in maintenance queries.
See if you can upgrade the AWS starting a day before the season opens, then downgrade after the rush. It's a small price to pay for what might be a plenty big performance boost.
Rather than the long complex query for counting, decrement some counters as you go. (This may or may not help, so play around with the idea.)
Your web server has some limit on the number of connections it will handle; limit that rather than letting 2K users get into MySQL and stumble over each other. Think of what a grocery store is like when the aisles are so crowded that no one is getting finished!
Be sure to use "transactions", but don't let them be too broad. If they encompass too many things, the traffic will degenerate to single file (and/or transactions will abort with deadlocks).
Do as much as you can outside of transactions -- such as collecting and checking user names/addresses, etc. If you do this after issuing the license, be ready to undo the license if something goes wrong. (This should be done in code, not via ROLLBACK.)
(More)
VIEWs are syntactic sugar; they provide no performance benefit or isolation. OTOH, if you build "materialized" views, there might be something useful.
A long "History list" is a potential performance problem (especially CPU). This can happen when lots of connections are in the middle of a transaction at the same time -- each needs to hang onto its 'snapshot' of the dataset.
Wherever possible, terminate transactions as soon as possible -- even if you turn around and start a new one. An example in Data Warehousing is to do the 'normalization' before starting the main transaction. (Probably this example does not apply to your app.)
Ponder having a background task computing the quotas. The hope is that the regular tasks can run faster by not having the computation inside their transactions.
A technique used in the reservation industry: (And this sounds somewhat like your item 1.) Plow ahead with minimal locking. At the last moment, have the shortest possible transaction to make the reservation and verify that the room (plane seat, etc) is still available.
If the whole task can be split into (1) read some stuff, then (2) write (and reread to verify that the thing is still available), then... If the read step is heavier than the write step, add more Slaves ('replicas') and use them for the read step. Reserve the Master for the write step. Note that Replicas are easy (and cheap) to add and toss for a brief time.
I know what gas, gasLimit, and gasPrice are, but I still have some confusion even after searching and reading through the Internet.
There is a gas limit per block, but why do many blocks not reach it? In other words, can a miner send a block to the network without reaching the block gas limit?
Assume the block gas limit is 4 million and I send a transaction with a 4 million gas limit, but when the miner executes it, the used gas is 1 million. Can the miner add extra transactions to the block to fill the remaining 3 million or not? Put another way, does a transaction with a big gas limit (that uses only a fraction of that gas) prevent the miner from adding more transactions to the block?
Each opcode costs some amount of gas. How does Ethereum measure the cost of each EVM opcode? (Any reference for an explanation?)
Thanks
Q1 The block gas limit is an upper bound on the total cost of transactions that can be included in a block. Yes, the miner can and should send a solved block to the network, even if the gas cost is 0. Blocks are meant to arrive at a steady pace in any case. So "nothing happened during this period" is a valid solution.
Q2a The gas cost of a transaction is the total cost of executing the transaction. Not subject to guesswork. If the actual cost exceeds the supplied gas then the transaction fails with an out-of-gas exception. If there is surplus gas, it's returned to the sender.
Q2b Yes, a miner can and should include multiple transactions in a block. A block is a well-ordered set of transactions that were accepted by the network. It's a unit of disambiguation that clearly defines the accepted order of events. Have a look here for exact meaning of this: https://ethereum.stackexchange.com/questions/13887/is-consensus-necessary-for-ethereum
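A toy packing loop makes the Q2 answers concrete. The rule it encodes: a transaction is only admitted if its gas *limit* fits in the remaining block space, but afterwards only its gas *used* accumulates. This is a simplified sketch; real clients also order transactions by gas price and handle refunds:

```python
def pack_block(txs, block_gas_limit):
    """Greedy toy packer. A tx is admitted if its gas limit fits in the
    space left after the gas actually used by earlier transactions."""
    included, gas_used = [], 0
    for tx in txs:  # assume already sorted by gas price
        if tx["gas_limit"] + gas_used <= block_gas_limit:
            included.append(tx)
            gas_used += tx["gas_used"]  # only used gas accumulates
    return included, gas_used

# A 4M-limit tx that only uses 1M still leaves room for ~3M more.
txs = [
    {"gas_limit": 4_000_000, "gas_used": 1_000_000},
    {"gas_limit": 2_500_000, "gas_used": 2_500_000},
]
block, used = pack_block(txs, 4_000_000)
print(len(block), used)  # 2 3500000
```

So a transaction with an oversized gas limit does not permanently reserve that space: once it executes and uses 1M, the remaining 3M is available to later transactions.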
Q3 I can't say for sure (possibly someone can confirm) that this is an up-to-date list: https://docs.google.com/spreadsheets/d/1m89CVujrQe5LAFJ8-YAUCcNK950dUzMQPMJBxRtGCqs/edit#gid=0
I have some backend servers located in two different datacenters (in the USA and in Europe). These servers just deliver ads on a CPM basis.
Besides that, I have a big & fat master MySQL server holding the advertisers' ad campaign money balances. Again, all ad campaigns are delivered on a CPM basis.
On every impression served from any of the backends, I have to decrement the ad campaign's money balance according to the impression price.
For example, the price per impression is 1 cent. Backend A has delivered 50 impressions and will decrement the money balance by 50 cents. Backend B has delivered 30 impressions and will decrement the money balance by 30 cents.
So, main problems as I see are:
Backends are serving about 2-3K impressions every second. So, decrementing the money balance on the fly in MySQL is not a good idea imho.
Backends are located in US and EU datacenters. MySQL master server is located in USA. Network latency could be a problem [EU backend] <-> [US master]
As possible solutions I see:
Using Cassandra as a distributed counter storage. I will try to avoid this solution as long as possible.
Reserving part of the money per backend. For example, backend A connects to the master and tries to reserve $1. Once the $1 is reserved and stored locally on the backend (in local Redis, for example), there is no problem decrementing it at light speed. The main problem I see is returning money from a backend to the master server when the backend is removed from the delivery scheme ("disconnected" from the balancer). Anyway, it seems to be a very nice solution and would allow us to stay within the current technology stack.
Any suggestions?
UPD: One important addition. It is not so important to deliver ad impressions with high precision. We can deliver more impressions than requested, but never fewer.
How about instead of decrementing balance, you keep a log of all reported work from each backend, and then calculate balance when you need it by subtracting the sum of all reported work from the campaign's account?
Tables:
campaign (campaign_id, budget, ...)
impressions (campaign_id, backend_id, count, ...)
Report work:
INSERT INTO impressions VALUES ($campaign_id, $backend_id, $served_impressions);
Calculate balance of a campaign only when necessary:
SELECT campaign.budget - SUM(impressions.count) * $impression_price AS balance
FROM campaign INNER JOIN impressions USING (campaign_id)
GROUP BY campaign.campaign_id;
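The log-then-aggregate idea can be exercised end to end with SQLite standing in for MySQL. Table and column names follow the sketch above; the per-impression price is assumed to be a constant here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE campaign (campaign_id INTEGER PRIMARY KEY, budget REAL);
    CREATE TABLE impressions (campaign_id INTEGER, backend_id TEXT, count INTEGER);
""")
conn.execute("INSERT INTO campaign VALUES (1, 100.0)")

# Each backend only ever appends -- no contended UPDATE on a balance row.
conn.execute("INSERT INTO impressions VALUES (1, 'A', 50)")
conn.execute("INSERT INTO impressions VALUES (1, 'B', 30)")

impression_price = 0.01  # assumed price per impression
(balance,) = conn.execute(
    """SELECT campaign.budget - SUM(impressions.count) * ? AS balance
       FROM campaign INNER JOIN impressions USING (campaign_id)
       GROUP BY campaign.campaign_id""",
    (impression_price,),
).fetchone()
print(balance)  # 100 - (50 + 30) * 0.01
```

Because the balance is derived, a crashed backend's last report can simply be re-inserted from its logs without double-counting worries on a shared counter.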
This is perhaps the most classical ad-serving/impression-counting problem out there. You're basically trying to balance a few goals:
Not under-serving ad inventory, thus not making as much money as you could.
Not over-serving ad inventory, thus serving for free since you can't charge the customer for your mistake.
Not serving the impressions too quickly, because usually customers want an ad to run through a given calendar time period, and serving them all in an hour between 2-3 AM makes those customers unhappy and doesn't do them any good.
This is tricky because you don't necessarily know how many impressions will be available for a given spot (since it depends on traffic), and it gets even more tricky if you do CPC instead of CPM, since you then introduce another unknowable variable of click-through rate.
There isn't a single "right" pattern for this, but what I have seen to be successful through my years of consulting is:
Treat the backend database as your authoritative store. Partition it by customer as necessary to support your goals for scalability and fault tolerance (limiting possible outages to a fraction of customers). The database knows that you have an ad insertion order for e.g. 1000 impressions over the course of 7 days. It is periodically updated (minutes to hours) to reflect the remaining inventory and some basic stats to bootstrap the cache in case of cache loss, such as actual
Don't bother with money balances at the ad server level. Deal with impression counts, rates, and targets only. Settle that to money balances after the fact through logging and offline processing.
Serve ad inventory from a very lightweight and fast cache (near the web servers) which caches the impression remaining count and target serving velocity of an insertion order, and calculates the actual serving velocity.
Log all served impressions with relevant data.
Periodically collect serving velocities and push them back to the database.
Periodically collect logs and calculate actual served inventory and push it back to the database. (You may need to recalculate from logs due to outages, DoSes, spam, etc.)
Create a service on your big & fat master MySQL server that serves the advertisers' ad campaign money balances.
This service must implement getCampaignFund(idcampaign, requestingServerId, currentLocalAccountBalanceAtTheRequestingServer), which returns a creditLimit to the regional server.
Imagine a credit card mechanism. Your master server grants some limit to your regional servers. Once this limit runs low, a threshold triggers a request for a new limit. But to get the new credit limit, the regional server must report how much of the previous limit it used.
Your regional servers might additionally implement these services:
getCampaignAccountBalance(idcampaign): reports the current usage of a specific campaign, so the main server can update all campaigns at a specific time.
addCampaign(idcampaign, initialBalance): registers a new campaign and its starting credit limit.
suspendCampaign(idcampaign): suspends the impressions for a campaign.
resumeCampaign(idcampaign): resumes impressions for a campaign.
finishCampaign(idcampaign): finishes a campaign and returns the current local account balance (currentLocalCampaignAccountBalance).
updateCampaignLimit(idcampaign, newCampaignLimit): updates the campaign credit limit (reallocation of credit between regional servers) and returns the account balance (currentLocalCampaignAccountBalance) of the previous credit limit acquired.
Services are great because they give you a loosely coupled architecture. Even if your main server goes offline for some time, your regional servers will keep running until they have exhausted their credit limits.
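A minimal sketch of this credit-limit loop, assuming a single campaign, a 50% slice size, and a 20% refill threshold (all three values are illustrative choices, not from the answer):

```python
class MasterBalance:
    """Central ledger: hands out credit slices and reclaims leftovers."""

    def __init__(self, campaign_budget):
        self.remaining = campaign_budget

    def get_campaign_fund(self, server_id, local_balance_left):
        # Reclaim whatever the regional server did not spend from its
        # previous slice, then grant a fresh slice of what is left.
        self.remaining += local_balance_left
        credit = self.remaining * 0.5  # slice size: an assumed policy
        self.remaining -= credit
        return credit


class RegionalServer:
    """Serves impressions against a locally held slice of credit."""

    REFILL_THRESHOLD = 0.2  # refill when 20% of the slice remains (assumed)

    def __init__(self, server_id, master):
        self.server_id = server_id
        self.master = master
        self.local = self.slice = master.get_campaign_fund(server_id, 0.0)

    def serve_impression(self, price):
        if self.local < price:
            return False  # out of local credit; impression refused
        self.local -= price
        if self.local <= self.slice * self.REFILL_THRESHOLD:
            # Report the unspent remainder and receive a new slice.
            self.local = self.slice = self.master.get_campaign_fund(
                self.server_id, self.local
            )
        return True
```

With a $100 budget and $1 impressions, a regional server keeps serving across multiple refills while money is conserved: the master's balance, the local slice, and the amount served always add up to the original budget.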
This may not be a detailed canonical answer, but I'll offer my thoughts as possible (and at least partial) solutions.
I'll have to guess a bit here, because the question doesn't say much about what measurements have been taken to identify MySQL bottlenecks, which IMHO is the place to start. I say that because IMHO 1-2K transactions per second is not out of range for MySQL. I've easily supported volumes this high (and much higher) with some combination of the following techniques, in no particular order because it depends on what the measurements reveal as the bottlenecks: 0. database redesign; 1. tuning buffers; 2. adding RAM; 3. solid-state drives; 4. sharding; 5. upgrading to MySQL 5.6+ if on 5.5 or lower. So I'd take some measurements and apply the foregoing as the results call for.
Hope this helps.
I assume
Ads are probably bought in batches of at least a couple of thousand
There are ads from several different batches being delivered at the same time, not all of which will be near empty at the same time
It is OK to serve some extra ads if your infrastructure is down.
So, here's how I would do it.
The BigFat backend has these methods
getCurrentBatches() delivers a list of batches that can be used for a while. Each batch contains a rate: the number of ads that can be served each second. Each batch also contains a serveMax: how many ads may be served before talking to BigFat again.
deductAndGetNextRateAndMax(batchId, adsServed) deducts the number of ads served since the last call and returns a new rate (which might be the same) and a new serveMax.
The reason to have a rate for each batch is that when one batch is starting to run out of funds it will be served less until it's totally depleted.
If one backend doesn't connect to BigFat for a while it will reach serveMax and only serve ads from other batches.
The backends could have a report period of seconds, minutes or even hours depending on serveMax. A brand new batch with millions of impressions left can run safely for a long while before reporting back.
When BigFat gets a call to deductAndGetNextRateAndMax, it deducts the number of served ads and then returns something like 75% of the total remaining impressions, up to a configured max. This means that at the end of a batch, if it isn't refilled, some ads will be delivered after the batch is empty, but it's better that the batch is actually depleted than sitting almost depleted for a long time.
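A toy version of that ledger logic, using the names from the answer. The 75% figure comes from the answer; the cap value is an assumption, and the per-second rate is omitted to keep the sketch short:

```python
class BigFat:
    """Toy batch ledger. Backends report served counts and receive a new
    serveMax: ~75% of what's left, capped, so a nearly-empty batch can
    only be overshot by a bounded amount."""

    SERVE_MAX_CAP = 10_000  # assumed configured maximum

    def __init__(self):
        self.remaining = {}  # batch_id -> impressions left to serve

    def add_batch(self, batch_id, impressions):
        self.remaining[batch_id] = impressions

    def deduct_and_get_next_max(self, batch_id, ads_served):
        left = max(0, self.remaining[batch_id] - ads_served)
        self.remaining[batch_id] = left
        return min(int(left * 0.75), self.SERVE_MAX_CAP)
```

A fresh million-impression batch gets the full cap and can run for a long while between reports; a batch with 100 impressions left only gets 75, bounding the over-serve.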
I have an engine to check whether an H264 video complies with the AVCHD or BDMV spec. The spec says the MAX system data rate is up to 24 Mbit/s. I want to know how to calculate the system data rate. Does it mean the average over the whole file? Or the average over 1 second?
The maximum specifies that you will never exceed 24 Mbit/s, so you will never send more than one bit in any ~42 ns (approximately) period. You can scale that to any time frame you want by simple multiplication, up to the point where you will never burst beyond 24M bits in one second (and you will still never send more than one bit in any of the 24M 42 ns periods that make up that second).
When you calculate an average for any time period, it MUST be below the specified maximum burst, but is simply considered an average. Those of us in the CATV industry spend a lot of time trying to make the transmission system behave as if the average rate is a constant rate, because if you have a certain throughput (in bits) for video, you don't want to waste any of it. We "rate shape" the video as well as using adaptive buffering in the digital set-top boxes that receive the signal.
A single QAM256 channel on the U.S. broadband cable system will support 40 Mbps and usually between 10 and 12 standard-definition signals with an average bit rate of approximately 4 Mbps. These channels will burst to 9 Mbps when there is a lot of change in the picture of an unpredictable nature. As you can imagine, a boxing match (with a lot of movement) takes significantly more bandwidth than a network news anchor reading from their desk, so we also try to match channels to fill this available bandwidth.
Typically, we can only fit 3 high-definition channels in the same 40Mbps channel and these have an average bit rate of about 12.5Mbps and as you've noted above, are limited to 24Mbps.
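The "never more than one bit per ~42 ns" claim is just the 24 Mbit/s ceiling rescaled, and the same scaling gives the bit budget for any measurement window. A quick check, assuming decimal megabits:

```python
MAX_RATE_BPS = 24_000_000  # 24 Mbit/s ceiling from the AVCHD/BDMV spec

def max_bits(window_seconds):
    """Most bits allowed in a window of this length if the stream
    never exceeds the maximum rate."""
    return MAX_RATE_BPS * window_seconds

bit_period_ns = 1e9 / MAX_RATE_BPS  # time between bits at the ceiling
print(bit_period_ns)    # ~41.7 ns, the "~42 ns" figure above
print(max_bits(1))      # 24,000,000 bits in any one-second window
```

So the limit is a burst ceiling over every window, not an average over the whole file: a file-wide average well under 24 Mbit/s can still violate the spec if any one-second window exceeds 24M bits.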
Hope this helps!