I'm stuck trying to clear up lag and timing issues with my Laravel site, and I'm curious whether there is a way to check the timing of connections between the host running the Laravel site and the host of its MySQL database.
I'm trying to eliminate possibilities for where the lag comes from, and I want to make sure that the two being on separate hosts (though on the same local network) isn't the issue.
Since Laravel is implemented in PHP, you can do something like this:
$TimeQuerySent = microtime(true);
// Send the request here, e.g. run the query through Laravel's DB facade
$TimeQueryReturned = microtime(true);
$TimeConsumed = $TimeQueryReturned - $TimeQuerySent;
The foregoing takes advantage of the fact that calling microtime in this manner causes it to return a floating-point number. Since time progresses forward, $TimeQueryReturned is always greater than $TimeQuerySent, and microsecond precision should be adequate if there is more than a fleeting lag on the connection.
Of course, the above measures only the overall time consumed by the request, which includes time spent by the server executing the query.
Getting information from the server side can be achieved by adding something like the following to your query.
-- These must be the first statements in the BEGIN ... END block of the routine
DECLARE StartTime DECIMAL(20,6);
SET StartTime = UNIX_TIMESTAMP(NOW(6));
SELECT StartTime, ...;
When the DECLARE/SET pair comes first in your stored routine, StartTime captures the time when the query began executing on the server, which is about as close as you can get to the moment the MySQL engine started working on it (in ad-hoc SQL, use a session variable such as @StartTime instead, since DECLARE is only valid inside stored programs). NOW(6) returns the timestamp to the nearest microsecond, and wrapping UNIX_TIMESTAMP() around it converts that to a Unix timestamp; note that UNIX_TIMESTAMP() interprets its argument in the session time zone, so passing UTC_TIMESTAMP() here would skew the result unless the session time zone is UTC. A DECIMAL(20,6) variable preserves the microsecond precision, which a FLOAT would lose. Since all three values are now microsecond-precision Unix timestamps, you can compare them to put a finer point on whether the delay is network lag or query-execution lag.
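To compare all three clocks in one place, here is a minimal sketch in Python with MySQLdb (the same pattern works from PHP). The host name and credentials are placeholders, and splitting the round trip into legs only makes sense if the web and database hosts have NTP-synchronized clocks:
import time
import MySQLdb

# Placeholder connection details
conn = MySQLdb.connect(host="db-host", user="app", passwd="secret", db="app")
cur = conn.cursor()

t_sent = time.time()                          # client clock, just before the request
cur.execute("SELECT UNIX_TIMESTAMP(NOW(6))")  # server clock, captured as the query runs
t_server = float(cur.fetchone()[0])
t_returned = time.time()                      # client clock, after the full round trip

print("total round trip: %.6f s" % (t_returned - t_sent))
print("client-to-server leg (approx.): %.6f s" % (t_server - t_sent))
print("server-to-client leg (approx.): %.6f s" % (t_returned - t_server))
conn.close()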
Laravel Debugbar (which integrates PHP Debug Bar) can also help here.
I have a MySQL database that I am running very simple queries against as part of a web app. Starting today, I have received reports from users that they got an error saying their account doesn't exist, and when they log in again, it does (this happened to only a few people, and only once each, so it is clearly rare). Based on my backend code, this error can only occur if the same query returns 0 rows the first time and 1 row the second. My query is basically SELECT * FROM users WHERE username="...". How is this possible? My suspicion is that the hard disk is having I/O failures, but I am unsure, because I would not expect MySQL to fail silently in that case. That said, I don't know what else it could be.
This could be a bug in your MySQL setup (though I'm unsure how your code is structured; it could just be a bad query). However, let's assume your query has been working fine up until now with no prior issues, so we'll rule out bad code.
With that in mind, I'm assuming it's either an optimizer bug or your max connection count being reached (I had the latter issue with my previous host, Hostinger).
If the issue is an optimizer bug of this kind, you can disable the suspect feature on a per-session basis by running this:
SET SESSION optimizer_switch="index_merge_intersection=off";
or you can set it globally in your my.cnf:
[mysqld]
optimizer_switch=index_merge_intersection=off
As for max connections, you can either increase the max_connections value (if your host allows it), or add logic to close the MySQL connection after each query executes:
$mysqli->close();
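If you suspect the connection limit, you can also check how close you are to it. Here is a quick sketch (shown in Python with MySQLdb, but the two SQL statements work from any client; connection details are placeholders):
import MySQLdb

conn = MySQLdb.connect(host="db-host", user="app", passwd="secret", db="app")
cur = conn.cursor()
cur.execute("SHOW VARIABLES LIKE 'max_connections'")  # the server-wide limit
print(cur.fetchone())
cur.execute("SHOW STATUS LIKE 'Threads_connected'")   # connections currently open
print(cur.fetchone())
conn.close()                                          # release the connection when done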
We are currently doing a lot of small queries. We execute a query, read the results, and then execute the next one. Since network requests cost a lot of time, this ping-ponging gets slow very fast.
This is why we want to do multiple queries at once, sending all data that the SQL server must know to it, and only retrieving one result (consisting of multiple result sets).
We found that Qt 5.14.1's QSqlQuery has the nextResult() function, but in the documentation (link) it says:
Some databases may execute all statements at once while others may delay the execution until the result set is actually accessed, [...].
MY QUESTION:
So, does MySQL Server 8.0 delay the execution until the result set is actually accessed? If so, we would still have one ping-pong per query, right? Which would still be very slow.
P.S. Our current solution to get down to one ping-pong is to UNION the different result sets (resulting in a kind of block-diagonal matrix with lots and lots of NULL values), and this question is meant to find a better way to do this.
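For reference, the multi-result pattern described above looks like this in Python's MySQLdb (a sketch with hypothetical table names; Qt's QSqlQuery::nextResult() walks the result sets in the same way):
import MySQLdb
from MySQLdb.constants import CLIENT

# The MULTI_STATEMENTS flag lets one execute() carry several statements
conn = MySQLdb.connect(host="db-host", user="app", passwd="secret", db="app",
                       client_flag=CLIENT.MULTI_STATEMENTS)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM users; SELECT COUNT(*) FROM orders")  # one round trip
while True:
    print(cur.fetchall())       # consume the current result set
    if cur.nextset() is None:   # advance to the next result set, if any
        break
conn.close()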
I'm using python-mysql (MySQLdb) to query a MySQL server.
There are two cursor classes. One is the client-side cursor, such as:
cursor = db.cursor(MySQLdb.cursors.DictCursor)
The other is the server-side cursor, such as:
cursor = db.cursor(MySQLdb.cursors.SSDictCursor)
The docs say that a server-side cursor means MySQL caches some results on the server side and then sends them out to the client. I'm confused about this: couldn't I kill a MySQL server just by opening multiple server-side cursors until it runs out of memory? Furthermore, does a server-side cursor make any sense at all? By default, MySQL sends each record out to the client as soon as it retrieves it. Does it make sense to cache the results first and then send them out?
I really don't know which cursor I should use: the client-side cursor or the server-side cursor.
I'm not the greatest database ninja around, but oftentimes things get built into server software that aren't really useful in the general or common case, but are really, really awesome in that one little corner case.
Nimdil gave you one, but this is another:
http://techualization.blogspot.com/2011/12/retrieving-million-of-rows-from-mysql.html
This person asserts that SSCursor is more of an "unbuffered" cursor.
This sort of seems to contradict that:
http://dev.mysql.com/doc/refman/5.7/en/cursor-restrictions.html
Anyway, it seems that the use case for server-side cursors is when you're dealing with datasets so large that the query result could overwhelm the client.
I believe MySQL would rather kill your cursor than crash because of a few oversized cursors.
You can think of several scenarios in which a server-side cursor makes sense. For example, if you have a slow network connection and the result set is big, you can start working on a small part of the data sooner, possibly pass it to another system, and then fetch some more; the overall speed of the solution would be greater.
Another scenario I can think of is when you have quite a powerful database server and a rather underpowered client machine: for a big dataset, it is easier for the database to hold the whole set for your client while the client manages its own memory efficiently.
There are possibly many other scenarios. If you think it doesn't make sense, just don't use it. Not every option is for every setup.
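As a concrete illustration of the streaming scenario above, here is a sketch of reading a large result with a server-side cursor in MySQLdb (connection details and table name are placeholders):
import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(host="db-host", user="app", passwd="secret", db="app",
                       cursorclass=MySQLdb.cursors.SSDictCursor)
cur = conn.cursor()
cur.execute("SELECT * FROM big_table")  # rows are not buffered on the client
total = 0
while True:
    rows = cur.fetchmany(1000)          # pull a manageable chunk at a time
    if not rows:
        break
    total += len(rows)                  # work on the chunk here
print(total)
cur.close()  # drain/close the cursor before issuing another query on this connection
conn.close()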
A cursor consists of three parts:
A query
A query result
A pointer to the position up to which data has been retrieved.
Depending on the query, the result can either be cached, or be retrieved in parts by the engine:
For example, a query whose result is usually not cached:
SELECT * FROM sometable;
MySQL (and most other DBMSes) will just retrieve a row from the table every time you request one. It can, however, use a table lock if you are using InnoDB and ACID-compliant transactions in READ COMMITTED style.
The second scenario is a query whose result has to be cached:
SELECT * FROM sometable ORDER BY a,b,c;
In this case MySQL (and, again, most other DBMSes) has to get all the data in the correct order first. For this, a temporary table is created on disk in the tmpdir location. This can cause disk-full issues (most of the time surfacing as out-of-memory errors) and loss of the connection; MySQL itself, however, keeps running.
I have a routine in MySQL that is very long and has multiple SELECT, INSERT, and UPDATE statements in it, with some IFs and REPEATs. It had been running fine until lately; now it hangs and takes over 20 seconds to complete (which is unacceptable, considering it used to take about 1 second).
What is the quickest and easiest way for me to find out where in the routine the bottleneck is? Basically, the routine is getting stopped up at some point. How can I find out where that is without breaking the routine apart and testing each section one by one?
If you use Percona Server (a free distribution of MySQL with many enhancements), you can make the slow-query log record times for individual queries, using the log_slow_sp_statements configuration variable. See http://www.percona.com/doc/percona-server/5.5/diagnostics/slow_extended_55.html
If you're using stock MySQL, you can add statements in the stored procedure to set a series of session variables to the value returned by the SYSDATE() function. Use a different session variable at different points in the SP. Then after you run the SP in a test execution, you can inspect the values of these session variables to see what section of the SP took the longest.
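A sketch of that technique (the routine name and section markers are hypothetical; the SET statements shown in the comment are what you would add inside the procedure body):
import MySQLdb

conn = MySQLdb.connect(host="db-host", user="app", passwd="secret", db="app")  # placeholders
cur = conn.cursor()

# Inside the stored procedure, between its sections, add:
#   SET @t0 = SYSDATE();
#   -- ... section 1 ...
#   SET @t1 = SYSDATE();
#   -- ... section 2 ...
#   SET @t2 = SYSDATE();
cur.execute("CALL my_long_routine()")  # hypothetical routine that sets @t0..@t2
while cur.nextset():                   # skip any result sets the routine returns
    pass
# Session variables survive the call, so read them back on the same connection
cur.execute("SELECT TIMESTAMPDIFF(SECOND, @t0, @t1) AS section1_s, "
            "TIMESTAMPDIFF(SECOND, @t1, @t2) AS section2_s")
print(cur.fetchone())
conn.close()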
To analyze a query, you can look at its execution plan. It is not always an easy task, but with a bit of reading you will find the solution. I leave some useful links (and a small EXPLAIN sketch after them):
http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html
http://dev.mysql.com/doc/refman/5.0/en/explain.html
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
http://www.lornajane.net/posts/2011/explaining-mysqls-explain
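To see a plan from client code, the pattern is simply to prefix the statement with EXPLAIN. A sketch (the table and filter are hypothetical, connection details are placeholders):
import MySQLdb

conn = MySQLdb.connect(host="db-host", user="app", passwd="secret", db="app")
cur = conn.cursor()
cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = %s", (42,))
for row in cur.fetchall():
    print(row)  # inspect the type, key, rows, and Extra columns
conn.close()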
I have my production SQL Server in a remote data center (and the web servers are located in the same data center). During development we observed that one particular view takes a long time to execute (about 60-80 seconds) on our local development SQL Server, and we were OK with it. It was promoted to production, and when I run the same query on the production DB (which is in the data center) from my local Management Studio, the query takes about 7 minutes 17 seconds to run (shown in the bottom right corner of Management Studio). When I ran Profiler, the Duration for that query was 437101, i.e. about 437101 milliseconds, which matches the 7:17 in Management Studio. My DBA says that in prod the view takes just about 60 to 80 seconds, though I see different numbers from Profiler and Management Studio. Can someone tell me what these durations mean in Profiler and Management Studio?
My guess: duration between sending the last request byte and receiving the last response byte from the server. The client statistics were as follows:
Client processing time: 90393
Total execution time: 92221
Wait time on server replies: 1828
My best guess is that "Duration" in Profiler means the time taken by SQL Server (for the optimizer to parse the query and generate the query plan, or reuse an existing plan, plus fetching records from the various pages) to produce the result set, excluding the time taken by the data to travel over the wire to the client.
Edit: I find that both of these times are about the same (Management Studio vs. Profiler). How do they relate to the times I see in the client statistics?
Can someone throw more light on this?
If I'm understanding your question correctly, you are first questioning the difference between the Duration reported by Profiler and the statistics presented in SSMS (either in the lower right-hand corner for overall time and/or via SET STATISTICS TIME ON). In addition to that, you seem to be unconvinced by the production DBA's comment that the view is executing in the expected duration of ~60 seconds.
First, from Books Online, the statistics that SSMS reports via SET STATISTICS TIME ON:
"Displays the number of milliseconds
required to parse, compile, and
execute each statement."
You're spot-on for this. As for Duration in Profiler, it is described as:
"The duration (in microseconds) of the
event."
From where I sit, these two should be functionally equivalent (and, as I'm sure you noticed, Profiler reports in microseconds if you're going against SQL 2005 or later). I say this because the "event" in this case (regarding Duration in Profiler) is the execution of the select, which includes delivery to the client; this is consistent in both cases.
It seems you suspect that geography is the culprit to the long duration when executing the query remotely. This very well may be. You can test for this by executing the select on the view in one query window then spawning another query window and reviewing the wait type on the query:
select
a.session_id
,a.start_time
,a.status
,a.command
,db_name(a.database_id) as database_name
,a.blocking_session_id
,a.wait_type
,a.wait_time
,a.cpu_time
,a.total_elapsed_time
,b.text
from sys.dm_exec_requests a
cross apply sys.dm_exec_sql_text(a.sql_handle) b
where a.session_id != @@SPID;
I would suspect that you would see something like ASYNC_NETWORK_IO as the wait type if geography is the problem; otherwise, check out what does come back from this. If you profile the query during your remote execution, the Duration will reflect the time statistics you see in SSMS. HOWEVER, if you use Profiler and find that the duration of this query, when executed from one of the web servers that sits in the same data center as the SQL Server, is still 7 minutes, then the DBA is a big, fat liar :). I would use Profiler to record queries that take longer than 1 minute, try to filter for your view, and take the average to see if you're on target for performance.
Because there are no other answers posted, I'm concerned that I'm way off base here - but it's late and I'm new to this so I thought I'd give it a go!
I was struggling with that until I found this:
http://blog.sqlauthority.com/2009/10/01/sql-server-sql-server-management-studio-and-client-statistics/
Also, if you open the Properties tab for your query, you may find a magical "Elapsed Time" entry that may give you the execution time.
Hope it helps...
Try with this:
-- SYSDATETIME()/DATETIME2 give sub-millisecond precision; plain DATETIME is only accurate to about 3 ms
DECLARE @time AS DATETIME2 = SYSDATETIME()
-- Your Query
SELECT CAST(DATEDIFF(SECOND, @time, SYSDATETIME()) AS VARCHAR)
     + ','
     + CAST(DATEDIFF(MICROSECOND, @time, SYSDATETIME()) AS VARCHAR)
     AS 'Execution Time'