ICO - how does a low gas price transaction get in between high-priced ones?

I am analyzing an ICO; the successful transactions start here: https://etherscan.io/txs?a=0x6267b5376c809445c9432bd9f14a3808b00eae2c&p=134
If you look at the last column, most successful transactions paid a very high fee (>0.1 ETH), but there are some in between which paid very little (https://etherscan.io/tx/0x5b9145d94449fe01b7bcecee162e3adffd389997ba27a5c8724b632ca455b61c).
My questions:
How are these transactions able to get in between the high-priced transactions? Is it just chance?
Is there some kind of strategy to make sure your transaction gets picked up - for example, if you are running a node?

Comparing the transactions for this contract to each other, yes, there is a big difference in tx costs. But if you look specifically at the gas price, the buyers are paying very high gas prices across the spectrum. The high-end transactions are paying 1k+ Gwei (some even higher than 3k Gwei), but even the "cheapest" transactions you're looking at are still paying ~100 Gwei. Compared to other transactions on the blockchain, that's a high cost. The cost to have a transaction mined as fast as possible varies depending on congestion, but whenever I check ethgasstation.info, the high-end gas prices are usually around 20-40 Gwei. As you can imagine, anything higher than that, miners are going to be eager to include ASAP.
As for your second question: that is exactly the best strategy to have your transaction picked up the fastest - pay a higher gas price.

Related

Should you scale through tables or computation in MySQL?

I have a project where customers buy a product with platform-based tokens. I have a MySQL table that tracks a customer buying x amount, and one tracking customer consumption (-x amount). In order to display the amount of tokens they have left on the platform, and to check funds before spending, I wanted to query (buys - consumed). But I remember people always saying that space is cheaper than computation (not just in $ but in query time as well). Should I have a separate table for querying the amount, updated with each buy or consume?
So far I have always tried to use the least amount of tables to keep things simple and easy to oversee, but I am starting to question whether that is right...
There is no right answer; keep in mind the goal of the application and the updates to the software that are likely to happen.
If these two tables keep one row per transaction a user makes, then the extra column would be useful, because you would have to sum the rows. If there is one row per user (likely your case), then you should almost certainly use just those two tables.
I would suggest you not add that extra column. In my experience, this kind of situation has the downside that the bigger the project becomes, the harder it is for you and the other developers to remember to update the extra column, because it is a dependent value.
Also, when the user buys products or consumes tokens, you would have to update that column as well, which costs time and effort.
You can store the (buys - consumed) value in the session and update it when needed (if real-time updates across multiple devices are not necessary).
If you need continuous updates, i.e. multiple queries over time, then the computation cost outweighs the storage cost, so you should have that third table/column.
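For illustration, a minimal sketch of the derived approach, assuming hypothetical tables tbl_buys and tbl_consumption with one row per transaction (the names and columns are mine, not from the question):

-- Derived balance for one customer: total bought minus total consumed.
-- COALESCE handles customers with no rows in one of the tables.
SELECT COALESCE(b.total, 0) - COALESCE(c.total, 0) AS balance
FROM (SELECT SUM(amount) AS total FROM tbl_buys        WHERE customer_id = 42) AS b,
     (SELECT SUM(amount) AS total FROM tbl_consumption WHERE customer_id = 42) AS c;

With an index on customer_id in both tables, this stays fast until the per-customer row counts get very large, which is the point where the extra balance column starts to pay off.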

MySQL query design for booking database with limited amount of goods and heavy traffic

We're running a site for booking salmon fishing licenses. The site has no problem handling the traffic 364 days a year. The 365th day is when the license sale opens, and that's where the problem occurs. The servers are struggling more and more each year due to increased traffic, and we have to further optimize our booking query.
The licenses are divided into many different types (tbl_license_types), and each license type is connected to one or more fishing zones (tbl_zones).
Each license type can have a seasonal quota, which is a single value set as an integer field in tbl_license_types.
Each zone can have a daily quota, a weekly quota and a seasonal quota. The daily quota is the same for all days, and the seasonal quota of course is a single value. Daily and seasonal are therefore integer fields in tbl_zones. The weekly quota however differs by week, therefore those are specified in the separate tbl_weekly_quotas.
Bookings can be for one or more consecutive dates, but are only stated as From_date and To_date in tbl_shopping_cart (and tbl_bookings). For each booking attempt made by a user, the quotas have to be checked against already allowed bookings in both tbl_shopping_cart and tbl_bookings.
To be able to count/group by date, we use a view called view_season_calendar with a single column containing all the dates of the current season.
In the beginning we used a transaction where we first made a query to check the quotas, and if quotas allowed we would use a second query to insert the booking to tbl_bookings.
However that gave a lot of deadlocks under relatively moderate traffic, so we redesigned it to a single query (pseudo-code):
INSERT INTO tbl_bookings (_booking_)
WHERE _lowest_found_quota_ >= _requested_number_of_licenses_
where _lowest_found_quota_ is a ~330-line-long SELECT with multiple subqueries, in which the related tables are joined multiple times in order to check all the quotas.
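As an illustration of the pattern (not the actual 330-line query), a single-statement insert that checks just one seasonal quota could look like this; the column names beyond the table names given in the question are assumptions:

-- Insert the booking only if zone 5's seasonal quota still allows it.
-- num_licenses and seasonal_quota are illustrative column names.
INSERT INTO tbl_bookings (license_type_id, zone_id, from_date, to_date, num_licenses)
SELECT 1, 5, '2020-05-19', '2020-05-25', 2
FROM DUAL
WHERE (SELECT seasonal_quota FROM tbl_zones WHERE zone_id = 5)
      - (SELECT COALESCE(SUM(num_licenses), 0) FROM tbl_bookings WHERE zone_id = 5)
      >= 2;

The application then checks the affected-row count to know whether the booking went through.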
Example: User wants to book License type A, for zones 5 and 6, from 2020-05-19 to 2020-05-25.
The system needs to
count previous bookings of license type A against the license type A seasonal quota.
count previous bookings in zone 5 for each of the dates in the range against the zone 5 daily quota.
same count for zone 6.
count previous bookings in zone 5 for each of the two weeks the dates are part of, against zone 5 weekly quota.
same count for zone 6.
count all previous bookings in zone 5 for the current season against zone 5 seasonal quota.
same count for zone 6.
If all are within quotas, insert the booking.
As I said, this was working well earlier, but due to the higher traffic load we need to optimize the query further. I have some thoughts on how to do this:
Using isolation level READ UNCOMMITTED on each booking until quotas for the requested zones/license type are nearly full, then falling back to the default REPEATABLE READ. As long as there's a lot left of the quota, the count doesn't need to be 100% correct. This will greatly reduce lock waits and deadlocks, right?
Creating one or more views which keep a count of all bookings for each date, week, zone and license type, then using those views in the WHERE clauses of the insert (a sketch of such a view follows this list).
If doing nr 2, use READ UNCOMMITTED in the views. If views report relevant quota near full, cancel the INSERT and start a new one with the design we're using today. (Hopefully traffic levels are coming down before quotas are becoming full)
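A minimal sketch of idea nr 2 for the per-zone, per-date counts; the column names (cal_date, zone_id, num_licenses) are assumptions, since the question only names the tables and the calendar view:

-- Count booked licenses per zone per date by expanding each booking's
-- From_date..To_date range against the season calendar view.
CREATE VIEW view_zone_day_counts AS
SELECT b.zone_id, c.cal_date, SUM(b.num_licenses) AS booked
FROM view_season_calendar AS c
JOIN tbl_bookings AS b
  ON c.cal_date BETWEEN b.From_date AND b.To_date
GROUP BY b.zone_id, c.cal_date;

Keep in mind that a plain MySQL view is re-evaluated on every reference, so this mainly tidies the query; it only helps performance if you turn it into a real, incrementally maintained summary table.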
I would greatly appreciate thoughts on how the query can be done as efficient as possible.
Rate Per Second = RPS
Suggestions to consider for your AWS Parameters group
innodb_lru_scan_depth=100 # from 1024 to conserve 90% of CPU cycles used for function every second
innodb_flush_neighbors=2 # from 1 to reduce time required to lower innodb_buffer_pool_pages_dirty count when busy
read_rnd_buffer_size=128K # from 512K to reduce handler_read_rnd_next RPS of 3,227
innodb_io_capacity=1900 # from 200 to use more of SSD IOPS
log_slow_verbosity=query_plan,explain # to make slow query log more useful
have_symlink=NO # from YES for some protection from ransomware
You will find these changes will cause transactions to complete processing quicker. For additional assistance, view my profile and network profile for contact info and free downloadable utility scripts to assist with performance tuning. On our FAQ page you will find "Q. How can I find JOINs or QUERIES not using indexes?" to assist in reducing the select_scan rate-per-hour of 1,173. Com_rollback averages 1 every ~2,700 seconds, usually correctable with a consistent read order in maintenance queries.
See if you can upgrade the AWS instance starting a day before the season opens, then downgrade after the rush. It's a small price to pay for what might be a plenty big performance boost.
Rather than the long complex query for counting, decrement some counters as you go; a sketch of the idea follows. (This may or may not help, so play around with the idea.)
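For example, with a pre-populated per-zone, per-day counter table (the table and its columns are hypothetical, not from the question):

-- One row per (zone, date) holding the remaining daily quota.
-- The UPDATE succeeds only while enough quota remains; check the
-- affected-row count afterwards to decide whether the booking proceeds.
UPDATE tbl_zone_day_counters
SET    remaining = remaining - 2
WHERE  zone_id = 5
  AND  booking_date = '2020-05-19'
  AND  remaining >= 2;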
Your web server has some limit on the number of connections it will handle; limit that rather than letting 2K users get into MySQL and stumble over each other. Think of what a grocery store is like when the aisles are so crowded that no one is getting finished!
Be sure to use "transactions", but don't let them be too broad. If they encompass too many things, the traffic will degenerate to single file (and/or transactions will abort with deadlocks).
Do as much as you can outside of transactions -- such as collecting and checking user names/addresses, etc. If you do this after issuing the license, be ready to undo the license if something goes wrong. (This should be done in code, not via ROLLBACK.)
VIEWs are syntactic sugar; they do not provide any performance or isolation. OTOH, if you make "materialized" views, there might be something useful.
A long "History list" is a potential performance problem (especially CPU). This can happen when lots of connections are in the middle of a transaction at the same time -- each needs to hang onto its 'snapshot' of the dataset.
Wherever possible, terminate transactions as soon as possible -- even if you turn around and start a new one. An example in data warehousing is to do the 'normalization' before starting the main transaction. (Probably this example does not apply to your app.)
Ponder having a background task computing the quotas. The hope is that the regular tasks can run faster by not having the computation inside their transactions.
A technique used in the reservation industry: (And this sounds somewhat like your item 1.) Plow ahead with minimal locking. At the last moment, have the shortest possible transaction to make the reservation and verify that the room (plane seat, etc) is still available.
If the whole task can be split into (1) read some stuff, then (2) write (and reread to verify that the thing is still available), then... If the read step is heavier than the write step, add more Slaves ('replicas') and use them for the read step. Reserve the Master for the write step. Note that Replicas are easy (and cheap) to add and toss for a brief time.

Derived vs Stored account balance in high rate transactions system

I'm writing a Spring Boot 2.x application using Mysql as DBMS. I use Spring Data and Hibernate.
I want to build an SMS gateway for my customers. Each customer has an account in my system and a balance.
For each SMS sent, the customer's balance must be reduced by the SMS cost. Furthermore, before sending the SMS, the balance should be checked to see whether the customer has enough credit (this implies having an up-to-date balance to check).
I want to handle a high rate of SMS because my customers are businesses, not just end users.
Each customer could therefore send hundreds of SMS in a really short time. I'm looking for an efficient way to update the customer's balance. Each transaction has a small price, but I have a lot of them.
I could derive the balance with a SELECT SUM(deposit - costs) FROM..., but this becomes very expensive as soon as I have millions of records in my system.
On the other hand, if I keep the value of the balance in a column, I would have two problems:
concurrency: many transactions at the same time may want to update the balance. I could use a pessimistic lock, but that would slow down the entire system;
correctness of the data: the balance could become wrong due to a wrong or missed update.
I could mitigate these points by running a task at the end of the day that reconciles the stored balance with the derived one, but:
with hundreds of customers it could stall my system for some time;
some attentive customer could notice the change in his balance and ask for an explanation. It's not nice when your balance changes without explanation while you are not doing anything.
I'm looking for some advice and best practices to follow. In the end, several big companies sell their services "pay as you go", so I guess there is a common way to handle the problem.
In banking, people are quite careful about money. Generally, the "place for truth" is the database. You can make memory the "place for truth", but this is more sophisticated, requiring concurrent in-memory databases. What if one of your servers goes down in the middle of a transaction? You need to be able to quickly fail over the database to a backup.
Do a benchmark to see if database update times meet your needs. There are various ways to speed them up moderately. If the rates are in your acceptable range, then do it this way. It is the simplest.
A common approach to speeding up transaction times is to have a thread pool and assign one thread per account. That way all transactions on an account are always handled by the same thread, which allows further optimization.
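If you do store the balance in a column, the credit check and the debit can at least be collapsed into one atomic statement; a minimal sketch, assuming a hypothetical accounts(id, balance) table and an illustrative cost of 0.05:

-- Deduct the SMS cost only if sufficient credit remains; the WHERE clause
-- makes the check-and-debit atomic, so no explicit locking is needed.
UPDATE accounts
SET    balance = balance - 0.05
WHERE  id = 42
  AND  balance >= 0.05;
-- An affected-row count of 0 means the customer had insufficient credit.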

Decrementing money balance stored on master server from numerous backends? (distributed counter, eh?)

I have some backend servers located in two different datacenters (in the USA and in Europe). These servers just deliver ads on a CPM basis.
Besides that, I have a big & fat master MySQL server holding the advertisers' ad campaign money balances. Again, all ad campaigns are delivered on a CPM basis.
On every impression served from any of the backends, I have to decrement the ad campaign's money balance according to the impression price.
For example, the price per impression is 1 cent. Backend A has delivered 50 impressions and will decrement the money balance by 50 cents. Backend B has delivered 30 impressions and will decrement the money balance by 30 cents.
So, the main problems as I see them are:
The backends serve about 2-3K impressions every second, so decrementing the money balance on the fly in MySQL is not a good idea imho.
The backends are located in US and EU datacenters, while the MySQL master is in the USA. Network latency could be a problem: [EU backend] <-> [US master].
As possible solutions I see:
Using Cassandra as distributed counter storage. I will try to avoid this solution for as long as possible.
Reserving part of the money per backend. For example, backend A connects to the master and tries to reserve $1. Once that $1 is reserved and stored locally on the backend (in a local Redis, for example), there is no problem decrementing it at light speed. The main problem I see is returning the money from the backend to the master server if the backend is removed from the delivery scheme ("disconnected" from the balancer). Anyway, it seems to be a very nice solution and would let us stay within the current technology stack.
Any suggestions?
UPD: One important addition. It is not so important to deliver ad impressions with high precision. We can deliver more impressions than requested, but never fewer.
How about, instead of decrementing a balance, you keep a log of all reported work from each backend, and then calculate the balance when you need it by subtracting the sum of all reported work from the campaign's budget?
Tables:
campaign (campaign_id, budget, ...)
impressions (campaign_id, backend_id, count, ...)
Report work:
INSERT INTO impressions (campaign_id, backend_id, count) VALUES ($campaign_id, $backend_id, $served_impressions);
Calculate the balance of a campaign only when necessary:
SELECT campaign.budget - COALESCE(SUM(impressions.count), 0) * $impression_price AS balance
FROM campaign LEFT JOIN impressions USING (campaign_id)
WHERE campaign.campaign_id = $campaign_id
GROUP BY campaign.campaign_id, campaign.budget;
This is perhaps the most classical ad-serving/impression-counting problem out there. You're basically trying to balance a few goals:
Not under-serving ad inventory, thus not making as much money as you could.
Not over-serving ad inventory, thus serving for free since you can't charge the customer for your mistake.
Not serving the impressions too quickly, because usually customers want an ad to run through a given calendar time period, and serving them all in an hour between 2-3 AM makes those customers unhappy and doesn't do them any good.
This is tricky because you don't necessarily know how many impressions will be available for a given spot (since it depends on traffic), and it gets even more tricky if you do CPC instead of CPM, since you then introduce another unknowable variable of click-through rate.
There isn't a single "right" pattern for this, but what I have seen to be successful through my years of consulting is:
Treat the backend database as your authoritative store. Partition it by customer as necessary to support your goals for scalability and fault tolerance (limiting possible outages to a fraction of customers). The database knows that you have an ad insertion order for e.g. 1000 impressions over the course of 7 days. It is periodically updated (minutes to hours) to reflect the remaining inventory and some basic stats to bootstrap the cache in case of cache loss, such as actual
Don't bother with money balances at the ad server level. Deal with impression counts, rates, and targets only. Settle that to money balances after the fact through logging and offline processing.
Serve ad inventory from a very lightweight and fast cache (near the web servers) which caches the impression remaining count and target serving velocity of an insertion order, and calculates the actual serving velocity.
Log all served impressions with relevant data.
Periodically collect serving velocities and push them back to the database.
Periodically collect logs and calculate actual served inventory and push it back to the database. (You may need to recalculate from logs due to outages, DoSes, spam, etc.)
Create a service on your big & fat master MySQL server that manages the advertisers' ad campaign money balances.
This service must implement getCampaignFund(idcampaign, requestingServerId, currentLocalAccountBalanceAtTheRequestingServer), which returns a creditLimit to the regional server.
Imagine a credit card mechanism. Your master server gives some limit to your regional servers. Once that limit runs low, a threshold triggers a request for a new limit; but to get the new credit limit, the regional server must report how much of the previous limit it has used. A sketch of the master-side reservation follows.
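On the master, the grant can be carved out of the campaign balance atomically; a minimal sketch, assuming a hypothetical campaign_balance(idcampaign, balance) table and a fixed grant size:

-- Reserve $grant from the campaign's remaining budget for one regional server.
-- The conditional WHERE prevents granting more credit than the campaign has left.
UPDATE campaign_balance
SET    balance = balance - $grant
WHERE  idcampaign = $idcampaign
  AND  balance >= $grant;
-- If no row was updated, return a creditLimit of 0 (or whatever remains).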
Your regional servers might additionally implement these services:
currentLocalCampaignAccountBalance getCampaignAccountBalance(idcampaign): to report the current usage of a specific campaign, so the main server can update all campaigns at a specific time.
addCampaign(idcampaign, initialBalance): to register a new campaign and its starting credit limit.
suspendCampaign(idcampaign): to suspend the impressions for a campaign.
resumeCampaign(idcampaign): to resume impressions for a campaign.
currentLocalCampaignAccountBalance finishCampaign(idcampaign): to finish a campaign and return the current local account balance.
currentLocalCampaignAccountBalance updateCampaignLimit(idcampaign, newCampaignLimit): to update the limit (reallocation of credit between regional servers). This service updates the campaign credit limit and returns the account balance of the previous credit limit acquired.
Services are great because you get a loosely coupled architecture. Even if your main server goes offline for some time, your regional servers will keep running until they have exhausted their credit limits.
This may not be a detailed canonical answer, but I'll offer my thoughts as possible (and at least partial) solutions.
I'll have to guess a bit here because the question doesn't say much about what measurements have been taken to identify the MySQL bottlenecks, which imho is the place to start. I say that because imho 1-2K transactions per second is not out of range for MySQL. I've easily supported volumes this high (and much higher) with some combination of the following techniques, in no particular order because it depends on what the measurements reveal as the bottlenecks: 0 - database redesign; 1 - tuning buffers; 2 - adding RAM; 3 - solid-state drives; 4 - sharding; 5 - upgrading to MySQL 5.6+ if on 5.5 or lower. So I'd take some measurements and apply the foregoing as called for by the results.
Hope this helps.
I assume
Ads are probably bought in batches of at least a couple of thousand.
There are ads from several different batches being delivered at the same time, not all of which will be near empty at the same time.
It is OK to serve some extra ads if your infrastructure is down.
So, here's how I would do it.
The BigFat backend has these methods:
getCurrentBatches(), which delivers a list of batches that can be used for a while. Each batch contains a rate: the number of ads that can be served each second. Each batch also contains a serveMax: how many ads may be served before talking to BigFat again.
deductAndGetNextRateAndMax(batchId, adsServed), which deducts the number of ads served since the last call and returns a new rate (which might be the same) and a new serveMax.
The reason to have a rate per batch is that when one batch starts to run out of funds, it is served less and less until it's totally depleted.
If one backend doesn't connect to BigFat for a while, it will reach serveMax and only serve ads from other batches.
The backends could have a reporting period of seconds, minutes or even hours, depending on serveMax. A brand new batch with millions of impressions left can run safely for a long while before reporting back.
When BigFat gets a call to deductAndGetNextRateAndMax, it deducts the number of served ads and then returns something like 75% of the total remaining impressions, up to a configured max. This means that at the end of a batch, if it isn't refilled, some ads will be delivered after the batch is empty, but it's better that the batch is actually depleted than almost depleted for a long time. A sketch of that deduct-and-grant step follows.
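In SQL terms, the BigFat side of deductAndGetNextRateAndMax could look roughly like this; the batches table, its columns, and the cap of 10,000 are assumptions for illustration:

-- Step 1: deduct what the backend reports as served since the last call.
UPDATE batches
SET    remaining = remaining - $adsServed
WHERE  batch_id = $batchId;
-- Step 2: grant ~75% of what is left as the next serveMax, capped at a
-- configured maximum.
SELECT LEAST(FLOOR(remaining * 0.75), 10000) AS next_serve_max
FROM   batches
WHERE  batch_id = $batchId;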

Am I charging enough? I think I may have put myself in a weird situation [closed]

Here is my situation, with rough numbers. I'd like to know if my thinking (at the bottom) seems sound to you guys. (Side note: I've read many of the related questions on here, and helpful as they were, none seemed to touch on this specific issue.)
For 2 years I was a senior developer at Company X. I was full-time, W-2, and making $100k/yr with benefits. (Roughly $50/hr).
[Then I got laid off, but that's not the point. I am in a large city and can find work easily. I am very happy to work from home rather than in an office.]
For 2 months I've done a few freelance projects for Company Y, a web firm. This was 1099, and I am charging $80/hr. (I did 100 or so hours over 2 or so months and figured I'd need to get some other clients soon).
Company Y loves my work and has gained new jobs because of it. They want more of my time and have offered me a 6 month contract, paid a fixed monthly rate regardless of hours (they assume 40ish per week). I'd still be working remotely.
So...
My freelance rate is higher than my old W-2 full-time rate for obvious reasons. I also realize that since freelancing "full time" requires lots of administrivia and sales, I would never really be racking up 40 hrs/wk at my $80 rate. (I've been toying with the idea of charging any other clients more, like $100/hr.)
However, I realize that from Company Y's perspective, offering me the security of a 6-month retainer contract should drive my hourly rate down (a bulk discount?), since I'd now have way more billable hours and less administrivia. This still has to be a raise over my old W-2 job for it to be worth my while, though, especially given the lack of benefits and the more complex tax situation.
Now I wish I had originally charged Company Y $100/hr for the initial freelance projects so that I could give them a better deal and charge them $80/hr for this 6 month contract.
Sorry for being so long-winded, but I hope you guys get my drift. Essentially, I should be offering them a lower hourly rate, but I really don't want to.
Is my assumption correct that as far as hourly rates go,
full-time W-2 < long-term 1099 < short-term project-based 1099?
If so, what might a good negotiation strategy be with Company Y to keep my hourly rate as is, and effectively nix their bulk discount? "You were getting a super low rate on those individual projects!"
Company Y loves my work and has gained new jobs because of it. They want more of my time and have offered me a 6 month contract, paid a fixed monthly rate regardless of hours (they assume 40ish per week). I'd still be working remotely.
Are you sure about this? Anytime I was asked to work for a "fixed monthly rate", it was a none-too-subtle way of trying to get a lot of "free" hours (effectively a massive rate cut).
I don't know of any consulting project where you can just quit at 40 hours, especially if the client gets a "push" where they need stuff sooner rather than later... the urgency is always theirs, and frequently manufactured rather than "real".
So, if they want you AND want a discount, give them maybe $70/hr for an HOURLY contract over the 6 months. That way they get a discount, and you get protection from overtime and any urgency that may arise.
Anything else and you WILL get hosed. Almost guaranteed.
I'm not from the US, so the W-2 and 1099 part is beyond me, but I'll address the rest, as those issues are pretty universal.
Generally speaking, a rule of thumb is that if you earn $100k per year you should be charging $100/hour or pretty close to it. This is to cover some or all of:
No personal/sick leave;
No paid annual leave;
No bonuses;
No training;
Insurances (health, public liability, professional indemnity, etc);
If you are not contracted to a certain number of hours per week, there is variability in income;
The employer can get rid of you much more easily than a full-time employee.
Now, this is my experience in Australia and Europe, where you actually have quite significant public health care. I imagine that since you don't in the US, health insurance costs might drive this even higher, so perhaps you should be asking for $120+/hour.
Note: if you're not paying things like professional indemnity insurance or you don't have some sort of legal protection (like operating through a limited liability company) you are playing with fire and I strongly urge you to seek professional advice on setting up a structure and/or obtaining relevant insurances to adequately protect you, your assets and your family if you have one.
Of course you have to balance this all out against the current market conditions, which aren't all that great (but vary from locale to locale).
I like the hourly rate scenario because it's "fair". By that I mean if you work 80 hours one week to get something out then you get paid for it. You just get paid for what you do and that's it. It's simple.
Now employers often don't like it because they can't necessarily predict (and thus budget) the costs.
The next step is to get paid a daily rate. I typically try to resist this, but I will go for it in certain situations. If so, you need to define exactly what a day is:
If you work at all do you get paid for a half day? A full day?
Do you need to work a certain number of hours to get paid for the day?
Do you only ever get paid for a full day no matter how many hours you work in that day?
Can you get paid for more than 5 days a week?
Generally for this sort of situation I'll multiply my hourly rate by 9, basing it on an 8-hour day. You're taking on some of the risk, so you need to get paid for that.
Beyond that you can go to weekly and then monthly rates. They too have the issue of defining what constitutes a week or a month. There are on average 20 or 21 working days in a month, so multiply your daily rate by 21-25 to get a monthly rate.
As for a negotiation strategy, pretty much use the points listed above. If $120/hour sounds like a lot (to them), point out all the costs involved, which are also costs they're saving. Use your proven track record to your advantage, because I can guarantee you that there are few things more catastrophic to a company than incompetent software development.
You could just tell them that the contract is only for 40hrs/wk max, and if they need you to go over that then it will be at your new rate of $100/hr, which may not be a problem if you gave them a discount on the first 40hrs.
Then chalk this up as a lesson learned and for any new clients change your rate. :)
6 months at 40 hours per week is almost 1000 hours, so for every dollar you drop from your hourly rate, you'll be discounting them about $1000. A drop of $5-6/hr should therefore be significant enough, IMO.
Has this "discount" actually come up in discussion yet? If not, the simplest solution would be to just go forward with the implicit assumption of no change in payment - remember, a "discount" is a generosity, not an obligation.
I'm no expert, but I would explain to Company Y that the original rate WAS the discount. If you can convince them that you were charging the bare minimum all along instead of trying to milk more money out of them, I think they would consider that a positive.
If you were completely cool with knocking a significant percentage off of your rate, I think in the back of their minds they would suspect you had been gaming them in the beginning.
As an analogy, say you go to a car lot. The salesman initially quotes $30,000. You come back with $20,000. He accepts without hesitation. You may actually end up with a good deal, but the salesman comes off as being shady anyway.
They want more of my time and have offered me a 6 month contract, paid a fixed monthly rate regardless of hours (they assume 40ish per week). I'd still be working remotely.
My argument would be:
I'll probably end up working over 40 hours per week. If you'd prefer a 6-month guaranteed contract paid hourly instead of at a flat rate, we can renegotiate that.
However, I also would say that a 6 month contract is not necessarily "long term" - more "mid term".
So: 1) you don't want to lower your rates; 2) they want a fixed rate.
It is likely that the "fixed" part matters more to them than the "lower" part. So call them, talk first, and find out whether that is true - whether it's about being fixed or about being cheaper. Second, explain to them that you cannot lower your rate; there are plenty of arguments for that. Good luck!