MySQL Workbench Query Stats: Server and Client Timing?

I'm running queries from MySQL Workbench, which lets you see stats for each query. Among these stats are "Timing (as measured at the client side)" and "Timing (as measured by the server)". I've included an example of what this output looks like.
[Image: Timing stats from MySQL Workbench]
I'm wondering why the timing on the server side is longer than on the client side. I may be completely wrong, but I thought the client-side figure would include the server time plus the network latency until the information is output, which would make the client-side time the longer one.
I'm new to this and not very familiar with execution timing, and the Workbench manual didn't offer much help; it assumed I already understood what the values meant and how they worked. Any help is appreciated!

I can't confirm this with documentation, but running a quick test on a really large query and playing with the number of rows returned to the client result grid provided a possible insight:
Don't Limit:
Timing (as measured at client side): Execution time: 0:00:0.77752995
Timing (as measured by the server): Execution time: 0:00:7.46805535
Table lock wait time: 0:00:0.00018100
Duration / Fetch time in "Action Output" pane: 0.778s / 7.723 sec
10 Rows
Timing (as measured at client side): Execution time: 0:00:0.38576984
Timing (as measured by the server): Execution time: 0:00:0.00058682
Table lock wait time: 0:00:0.00018400
Duration / Fetch in "Action Output" pane: 0.386 / 0.00002 sec
It makes sense to me that the server measures from the time the client started asking for records to the time it stopped asking, while the client measures only the time it takes the server to generate the rows it requested. So perhaps the way to get an "accurate" total execution time is to set the rows to return to "Don't Limit" and check the "Duration" in the "Action Output" pane.
I've posted on the MySQL forum hoping to get an explanation, and will post back here if one is forthcoming.
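One way to check this theory without the result grid getting in the way is MySQL's statement profiling, which reports server-side time only. A sketch (SET profiling is deprecated in MySQL 5.6+ in favor of the Performance Schema, but it still works and is convenient for one-off tests; the table name is a stand-in):

```sql
-- Enable profiling for this session, run the query, then inspect timings.
SET profiling = 1;

SELECT * FROM some_large_table;   -- stand-in for the query under test

SHOW PROFILES;                    -- one row per profiled statement, with Duration
SHOW PROFILE FOR QUERY 1;         -- stage-by-stage breakdown ("Sending data", etc.)
```

The "Sending data" stage in the breakdown is where time spent streaming rows to the client shows up, which is exactly the part the result-grid limit affects.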

Related

MySQL 8 why is initial response to query slow, speeds up thereafter

After several minutes of inactivity (no use of the website), MySQL 8 slows right down. An initial query after inactivity can take a minute, but subsequent ones take seconds. The same query (like logging in) takes a second or two if there is already activity on the server.
Has anyone encountered this or know how to correct this behavior? The machine itself has a significant amount of resources; it's just the first "warm-up" call that is slow.

MySQL query takes 10X time every once in a while

I am working on an issue where a MySQL SELECT query usually completes in 2 minutes, but takes more than 25 minutes every once in a while (about once in ten executions).
Could this be:
1: An index issue - if it were, the query would take roughly the same time on every execution
2: A resource crunch - CPU utilization goes up to 60-70% when this query gets stuck (it is usually around 40%)
3: A table lock issue - the logs say the table was only locked for 10 ms; I do not know how else to check for this
Please suggest which issue appears most likely.
Thanks in advance...
Edit: Required information
Total rows: 40,000,000
Can't post the query or the schema (Will get fired)
Anyway, I just wanted to know the general analysis techniques.
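Since the query and schema can't be shared, a general technique is to catch the slow runs in the act. A sketch using the slow query log and live inspection (assumes the privileges to change global settings; the 300-second threshold is an arbitrary choice for this case):

```sql
-- Log any statement slower than 5 minutes so the bad runs are captured.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 300;

-- While a slow run is in progress, see what else the server is doing:
SHOW FULL PROCESSLIST;

-- And check InnoDB's view of locks, pending I/O, and buffer pool state:
SHOW ENGINE INNODB STATUS;
```

Comparing EXPLAIN output from a fast run against a slow run will also show whether the optimizer is occasionally picking a different plan, which would distinguish an index/plan problem from a resource crunch.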

SSRS Report Timing out in Production Server (except after refreshing 3 times)

The report works fine on the DEV and QA servers, but when placed in Production the following error comes up:
An error occurred during client rendering.
An error has occurred during report processing.
Query execution failed for dataset 'Registration_of_Entity'.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The strange part is that the Admins have assured me this report has been set up with no timeout at all.
If I refresh the report 3 times every morning, the error message goes away.
What can I do to fix this issue so that the report never receives this error?
There are several steps to resolve this issue correctly.
I advise following them in this order:
1. Reduce the query execution time
Execute the query of the DataSet Registration_of_Entity in SSMS and see how long it takes to complete.
If your query requires more time to execute than the timeout specified for the DataSet, you should first try to reduce this time, for example:
Change the query structure (rethink joins, use CTEs, ...)
Add indexes
Looking at the execution plan can help.
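For example, in SSMS you can get the timing breakdown and the plan directly while testing the dataset query (a sketch; the Registration_of_Entity query itself is omitted here):

```sql
-- Show parse/compile/execute times and per-table I/O for each statement:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- ... run the Registration_of_Entity dataset query here ...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

Enabling "Include Actual Execution Plan" (Ctrl+M) before running will also surface missing-index suggestions and the most expensive operators.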
2. Reduce the query complexity
Do you need all those rows/columns?
Do you need to have all these calculations on the database side?
Could it be done in the report instead?
You could try to:
Reduce the query complexity
Split the query in smaller queries
Again, looking at the execution plan can help.
3. Explore additional optimizations not related to the query itself
You really need this query, but do you need the data in real time?
Are there a lot of other queries being executed on this server?
You could look into:
Caching
Replication / Load Balancing
Note that from SSRS 2008 R2, the new Shared DataSets can be cached. I know it doesn't apply in your case, but who knows, it could help others.
4. Last resort
If all the above steps failed to solve the issue, then you can increase the timeouts.
Here is a link to a blog post explaining the different timeouts and how to increase them.
Do you know if your query is becoming deadlocked? It could be that the report gets blocked on the server during peak times.
Consider optimizing your query or, if the data can be read uncommitted, adding WITH (NOLOCK) after each FROM and JOIN clause. Be sure to google WITH (NOLOCK) if you are unfamiliar with it, so you know what reading uncommitted data can do.
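For illustration, the two equivalent forms (the table and column names here are made up; dirty reads are possible with either):

```sql
-- Per-table hint form:
SELECT r.EntityID, r.RegisteredOn
FROM dbo.Registration AS r WITH (NOLOCK)
JOIN dbo.Entity AS e WITH (NOLOCK)
    ON e.EntityID = r.EntityID;

-- Or set it once for the whole batch/connection:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT r.EntityID, r.RegisteredOn
FROM dbo.Registration AS r
JOIN dbo.Entity AS e
    ON e.EntityID = r.EntityID;
```

The isolation-level form is easier to maintain for a report with many joins, since a single forgotten hint silently reintroduces blocking.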

How to improve the mysql "fetch" time of a result set?

I have a query that returns a large data set needed for reporting purposes. Currently, the "Duration" shown in MySQL Workbench, which I'm assuming to be execution time, is about 7 seconds, so the query itself is fairly optimized. It returns a measly 6000 rows, yet it takes nearly 150 seconds to return them according to the "Fetch" time.
Now, there are over 50 columns, which may explain some of the slowness, but when I exported the data set to a spreadsheet it turned out to be about 4 MB. I'm certainly no expert, but I didn't expect 4 MB to take 150 seconds to come over the pipe. So I performed the same query against a localhost setup to eliminate networking issues. Same result! About 7 seconds to execute, and 150 seconds to return the data, on the same machine.
This report is expected to run in real time on demand, so making the end user wait 2 minutes is unacceptable for this use case. How can I improve the time it takes to return the data from MySQL?
UPDATE: Thank you all for starting to point me in the right direction. As it turns out, the "Duration" and "Fetch" in Workbench are horribly inaccurate. The two minutes I was experiencing was all execution time, and in fact my query needed optimizing. Thanks again; this had me scratching my head. I will never rely on these metrics again...
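For anyone hitting the same wall: a quick way to separate execution time from transfer time is to make the server do all the work but return almost nothing (a sketch; the inner query and table name stand in for your own):

```sql
-- If this also takes ~2 minutes, the time is execution, not fetch/transfer:
SELECT COUNT(*)
FROM (
    SELECT col1, col2   -- your original 50-column query goes here
    FROM your_report_table
) AS t;
```

The server still has to materialize the full result for the derived table, but only a single row crosses the wire, so any remaining slowness is on the execution side.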

Query Execution time in Management Studio & profiler. What does it measure?

I have my production SQL Server in a remote data center (the web servers are located in the same data center). During development we observed that one particular view takes a long time to execute (about 60-80 secs) on our local development SQL Server, and we were OK with it. It was promoted to production, and when I run the same query on the production DB (in the data center) from my local Management Studio, the query takes about 7 min 17 secs to run (shown in the bottom right corner of Management Studio). When I ran Profiler, the duration reported for that query was 437101, which I first took to be microseconds; but since it shows up in Management Studio as 7:17, it is actually about 437101 milliseconds. My DBA says that in prod the view takes just 60 to 80 seconds, though I see different numbers in Profiler and Management Studio. Can someone tell me what these durations mean in Profiler and Management Studio?
My guess: the duration between sending the last request byte and receiving the last response byte from the server. The client statistics were as follows:
Client Processing time: 90393
Total Execution time: 92221
Wait time on server replies: 1828
My best guess on what "Duration" in Profiler means is: the time taken by SQL Server (for the optimization engine to parse the query, generate the query plan or reuse an existing one, plus fetch records from the various pages) to generate the result set, excluding the time taken for the data to travel over the wire to the client.
Edit: I find that both these times are about the same (Management Studio vs Profiler). How do they relate to the times I see in the client statistics?
Can someone throw more light on these?
If I'm understanding your question correctly, you are first asking about the difference between the Duration reported by Profiler and the statistics presented in SSMS (either in the lower right-hand corner for overall time and/or via SET STATISTICS TIME ON). In addition, you seem unconvinced by the production DBA's comment that the view is executing in the expected duration of ~60 seconds.
First, from Books Online, the statistics that SSMS reports via SET STATISTICS TIME ON:
"Displays the number of milliseconds required to parse, compile, and execute each statement."
You're spot-on for this. As for Duration in Profiler, it is described as:
"The duration (in microseconds) of the event."
From where I sit, these two should be functionally equivalent (and, as I'm sure you noticed, Profiler reports in microseconds if you're going against SQL 2005 or later). I say this because the "event" in this case (regarding Duration in Profiler) is the execution of the SELECT, which includes delivery to the client; this is consistent in both cases.
It seems you suspect that geography is the culprit to the long duration when executing the query remotely. This very well may be. You can test for this by executing the select on the view in one query window then spawning another query window and reviewing the wait type on the query:
select
    a.session_id,
    a.start_time,
    a.status,
    a.command,
    db_name(a.database_id) as database_name,
    a.blocking_session_id,
    a.wait_type,
    a.wait_time,
    a.cpu_time,
    a.total_elapsed_time,
    b.text
from sys.dm_exec_requests a
cross apply sys.dm_exec_sql_text(a.sql_handle) b
where a.session_id != @@SPID;
I would suspect that you would see something like ASYNC_NETWORK_IO as the wait type if geography is the problem; otherwise, check out what does come of this. If you're profiling the query of your remote execution, the Duration will reflect the time statistics you see in SSMS. HOWEVER, if you're using Profiler and finding that the duration of this query when executed from one of the web servers that sits in the same data center as the SQL Server is still taking 7 minutes, then the DBA is a big, fat liar :). I would use Profiler to record queries that take longer than 1 minute, filter for your view, and take the average to see if you're on target for performance.
Because there are no other answers posted, I'm concerned that I'm way off base here - but it's late and I'm new to this so I thought I'd give it a go!
I was struggling with that until I found this...
http://blog.sqlauthority.com/2009/10/01/sql-server-sql-server-management-studio-and-client-statistics/
Also, if you open the Properties window for your query, you may find some magical "Elapsed Time" that may give you the execution time...
Hope it helps...
Try with this (DATETIME2 and SYSDATETIME() are used so the microsecond difference is actually meaningful; plain DATETIME only has ~3 ms precision):
DECLARE @time AS DATETIME2 = SYSDATETIME();
-- Your Query
SELECT CAST(DATEDIFF(SECOND, @time, SYSDATETIME()) AS VARCHAR)
     + ','
     + CAST(DATEDIFF(MICROSECOND, @time, SYSDATETIME()) AS VARCHAR)
     AS 'Execution Time';