Long-running query alternating states - MySQL

I have a long-running query (about 3 minutes) that, during execution, alternates between the "sending data" and "writing to net" states. Why does it alternate between those states, and if it's an internal request within the network, how come it takes 1 minute to deliver 50k rows?
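In case it helps with diagnosis: a minimal sketch of how to watch the state transitions from a second connection while the query runs (standard MySQL system tables only; the performance_schema query assumes MySQL 5.6 or later):

-- Watch the state of the long-running query from a second connection.
SHOW FULL PROCESSLIST;

-- Or, on MySQL 5.6+, poll performance_schema for active queries only:
SELECT PROCESSLIST_ID, PROCESSLIST_STATE, PROCESSLIST_TIME, PROCESSLIST_INFO
FROM performance_schema.threads
WHERE PROCESSLIST_COMMAND = 'Query';

Polling this way can show whether most of the time goes into producing rows ("sending data") or into shipping them over the network ("writing to net").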

Related

Why do I see spikes in response time when a thread group is ending its execution?

My test runs for 3 hours.
Two of the ten thread groups (Ultimate Thread Group) are configured so that each generates load in 3 sets. Both thread groups follow an identical load-generation pattern and both run for a little less than 2 hours, as shown in the following picture, while the rest of the thread groups continue to execute for the remaining time.
But why do I see spikes in response times when these three sets are ending?
However, the response time remains low over the overall duration of the test.
Similar spikes are seen in another thread group at the end of the test.
I have tried increasing the shutdown time of the thread groups from 10 seconds to 30 seconds, but no help so far. Going through the details in JMeter, it became clear that we see spikes in response time only when the load starts to go down, i.e. when execution of threads in those two particular thread groups is ending.
I am using JMeter 5.0.
You are likely seeing the effect of forcibly shut-down threads that are still in flight at the end of the test run. See:
https://groups.google.com/forum/#!topic/jmeter-plugins/XAsUHsrJEDw
If possible, consider adding a rampdown to your test plan.
Given the very high response times of hundreds of seconds, this is likely an artifact of the threads being shut down before all the responses came back. Given the charts, I suggest using a 30-60 second shutdown time to ensure enough padding.
I've noticed this as well using v5.1.1 with the 'Ultimate Thread Group' or the 'Standard Thread Group' when using a scheduled duration.
It occurs when using a Transaction Controller with 'Generate parent Sample' selected.
Unchecking it appears to resolve the problem. This, however, is not ideal, as I end up with far too many sampler results (which is the reason for checking 'Generate parent Sample' in the first place: to get aggregated transaction results only).
(Screenshots: "Response Times - High At End of Test"; "Transaction Controller - Generate Parent Sample".)

How is the time to re-execute a Google Apps Script calculated?

Let us suppose that I have a script running every 10 minutes, and now I add the line
Utilities.sleep(30000)
in between.
Will it now keep running every 10 minutes, or every 15 minutes?
If a timed trigger is set to fire every 10 minutes, that is what it will do; it does not depend on how long the function takes to execute. In principle, you could have a 5-minute delay inside a function that's triggered to run every 1 minute. Except that will quickly run into problems:
Total trigger-based execution time limit: 90 minutes per day
"There are too many scripts running simultaneously for this Google user account" (how many is "too many" is not documented as far as I know).

Optimizing Sql Transactions (large single transaction vs many small ones)

I'm working on a web server. I can have an endpoint that compiles data in multiple transactions, or all in a single transaction. Which would be faster? Which would be better?
The answer: it depends on the amount of data you expect your database to return.
A: A lot of data being returned (thousands, millions):
Suppose you are building the next Facebook. If you are about to fetch a really enormous amount of data (2 million email addresses), it would probably be better to use some kind of "pagination" and fire a query every few seconds or minutes. You wouldn't want a single query that runs for 10 minutes to get your results while keeping the entire server busy.
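A minimal sketch of such pagination, assuming a hypothetical users table with an indexed auto-increment id and an email column (keyset pagination stays fast on deep pages, unlike large LIMIT ... OFFSET values):

-- First page:
SELECT id, email
FROM users
ORDER BY id
LIMIT 1000;

-- Next page: resume after the last id the previous page returned (say 1000).
SELECT id, email
FROM users
WHERE id > 1000
ORDER BY id
LIMIT 1000;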
B: A small or moderate amount of data being returned:
Or, if you are about to fetch a moderate amount of data (300 cities, 523 employees and 43 phones), then you don't want to waste transaction time by executing a separate SQL query for cities, employees and phones; try to use as few separate queries as possible. This probably means using a lot of JOINs.
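A minimal sketch with hypothetical cities, employees and phones tables: one round trip instead of three separate queries.

SELECT c.name   AS city,
       e.name   AS employee,
       p.number AS phone
FROM cities c
JOIN employees e ON e.city_id = c.id
LEFT JOIN phones p ON p.employee_id = e.id;

The LEFT JOIN keeps employees that have no phone in the result set.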

Inconsistent response time in mysql select query

We have a lot of MySQL SELECT queries for reporting needs. Most are a little complex, and they generally include 1. five or six JOIN statements and 2. three or four subqueries in the SELECT clause.
All the indexes are properly in place in the production environment. We have checked with EXPLAIN multiple times and the query plans look OK.
Some of the queries behave very strangely in terms of response time. The same query returns in less than 500 milliseconds at times (which suggests the indexes are working fine), and when we run it a minute or so later, it returns with a much higher response time (varying from five or six seconds up to 30 seconds). Sometimes (around once in 20 runs) it gives a timeout error.
This might be due to server load, but the high variance is so frequent that we think there is something else we need to tune.
Can someone please point me in the right direction on what else to look at?
This kind of behaviour is usually caused by a bottleneck in your stack.
It's like a revolving door in a building: the door can handle one person at a time, and each person takes 3 seconds; as long as people don't arrive at a rate of more than one person every 3 seconds, you don't know it's a bottleneck. If people arrive at a faster rate for a short period of time, the queue grows a little but disappears quickly. If people arrive at a rate of one person every 2.5 seconds for an hour, the queue becomes unmanageable, and can take far longer than that one hour to disappear.
Your database system is like a long corridor of revolving doors - most doors can operate in parallel, but all of them are limited.
(Sorry for the rubbish analogy, but I find it helps to visualize these things with real-world images).
If the queries are showing a high degree of variance in their performance profile, I'd look at the system performance monitor (top on Linux, Perfmon on Windows) and try to correlate slow performance with the behaviour of the system. If you see a sudden spike in CPU utilization when the queries slow down, that's likely to be your bottleneck; if you see a sudden spike in disk throughput, you might look there.
Once you have a hypothesis about the bottleneck, you can look at ways of resolving it - throwing hardware at the problem is usually the cheapest.
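If you want hard data to test the hypothesis against, the slow query log can catch the outliers; a minimal sketch, assuming MySQL 5.1 or later (the 1-second threshold is just an example):

-- Log any statement that takes longer than 1 second.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
-- Optionally write the log to a table so it can be queried directly.
SET GLOBAL log_output = 'TABLE';

-- Later, see what was slow and when, to correlate with top/Perfmon data.
SELECT start_time, query_time, lock_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 20;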

MySQL heavy write and heavy read

I have an app that performs only two operations.
Produce about 300K log entries about the status of 10K hardware entities in 30 minutes, i.e. 1 entry per entity per minute.
Mail the respective admin if 4 failures occur for a particular entity, i.e. every 4 minutes I retrieve the last 4 status entries for each of the 10K entities and mail if necessary.
I have two tables, Entity and StatusEntries, with a foreign key constraint. For now I insert dummy entries without checking the hardware entities, and processor usage still shoots up.
Should I switch to MyISAM? I tried replication on the same machine; it drives the processor even higher.
Please suggest a feasible solution to this problem.
300K log entries about the status of 10K hardware entities in 30 mins
About 166 INSERTs/s (300,000 rows / 1,800 seconds).
every 4 minutes I got retrieve 4 status entries for each 10K entities
About 41 simple SELECTs/s (10,000 queries / 240 seconds).
You should not have any problems with that, it's not a very heavy load.
Can you give more details about the table structure, and how you do your INSERTs and your SELECTs?
Indexes should definitely not be created for everything; only relevant indexes (those that actually speed up your queries) are worth the cost of updating them at each insert!
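For illustration, a minimal sketch under assumed names (the question only mentions the Entity and StatusEntries tables; all column names here, including Entity's id key, are hypothetical): one composite index serves the every-4-minutes check, and multi-row INSERTs cut per-statement overhead at ~166 rows/s.

CREATE TABLE StatusEntries (
    id         BIGINT AUTO_INCREMENT PRIMARY KEY,
    entity_id  INT NOT NULL,
    status     TINYINT NOT NULL,          -- hypothetical: 0 = OK, 1 = failure
    created_at DATETIME NOT NULL,
    FOREIGN KEY (entity_id) REFERENCES Entity(id),
    INDEX idx_entity_time (entity_id, created_at)  -- serves the 4-minute check
) ENGINE=InnoDB;

-- Batch many status rows into one statement instead of 166 single-row INSERTs per second.
INSERT INTO StatusEntries (entity_id, status, created_at) VALUES
    (1, 0, NOW()),
    (2, 1, NOW()),
    (3, 0, NOW());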