Laravel slow MySQL query latency

echo "\nexec first time:";
$currentTime = microtime(true);
$users->paginate($request->length, ['*'], 'page', $request->start/$request->length + 1);
echo microtime(true) - $currentTime;
echo "\nexec second time:";
$currentTime = microtime(true);
$users->paginate($request->length, ['*'], 'page', $request->start/$request->length + 1);
echo microtime(true) - $currentTime;
The above is the code in my controller for testing the latency when querying MySQL. As you can see, I execute the same command twice, and the execution latency is different.
exec first time: 2.7011959552765
exec second time: 0.78873896598816
The above is the performance output. According to the Laravel documentation, the service provider uses the DI pattern to share the DB connection. If the connection is not being recreated, what explains this result?
If the result is due to the connection being recreated, how can I share the connection pool?

On a "cold" server, everything is on disk. A query must pull things into RAM to be performed. This typically means that the first time you run a query it takes X seconds; the second time it takes more like X/10 seconds.
Another issue is with "pagination", especially if it is done via LIMIT and OFFSET. All of the "offset" rows must be stepped over. So, as the user 'pages' through the data, the pages come up slower and slower -- because more and more rows are being stepped over. http://mysql.rjweb.org/doc.php/pagination
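For illustration, the "remember where you left off" technique described at that link looks roughly like this (the users table and id column here are assumptions for the example, not taken from the question):
-- OFFSET pagination: MySQL still reads and discards the first 10000 rows
SELECT * FROM users ORDER BY id LIMIT 10000, 10;
-- Keyset pagination: continue from the last id seen on the previous page
SELECT * FROM users WHERE id > 10010 ORDER BY id LIMIT 10;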

Related

MySQL 5.7 Unable to run query longer than 900 seconds [duplicate]

mysql_query() keeps running after php exits

I've noticed that if I execute a long-running MySQL query with PHP using mysql_query() (I know I'm not supposed to use that) and the PHP process then gets killed, the query continues to run on the MySQL server. This is not a persistent connection. The connection is made with:
$db = mysql_connect($host, $login, $pass, false);
$sql = 'SELECT COUNT(*) FROM `huge_table`';
$result = mysql_query($sql, $db);
For example, let's say I have a 1 billion row table and a php process does this for some reason:
SELECT COUNT(*) FROM `huge_table`
And then it times out (say because I'm running php-fpm with request_terminate_timeout=5), so it kills the process after 5 seconds to make sure it doesn't hog things.
Even though the process is killed, the query still runs on MySQL even far beyond wait_timeout.
Is there any way to make sure that if the PHP process exits for whatever reason, it also kills any running queries that it made?
I'm using tokudb 5.5.38-tokudb-7.1.7-e which is mysql 5.5.38
crickeys, when a PHP script starts to execute and it gets to the part where it executes a MySQL query, that query is handed over to MySQL. Control of the query is no longer in PHP's hands... PHP at that point is only waiting for a response from MySQL so it can proceed. Killing the PHP script doesn't affect the MySQL query because, well, the query is MySQL's business.
Put another way, PHP comes to the door, knocks, hands over the goods and waits for you to bring back a response so he can be on his way. Shooting him won't affect what's going on behind the door.
You could run something like this to retrieve the longest running processes and kill them:
<?php
$con = mysqli_connect("example.com", "peter", "pass", "my_db");
// Check connection
if (mysqli_connect_errno()) {
    echo "Failed to connect to MySQL: " . mysqli_connect_error();
}
$max_execution_time = 10; // maximum allowed query time, in seconds
$result = mysqli_query($con, "SHOW FULL PROCESSLIST");
while ($row = mysqli_fetch_array($result)) {
    if ($row["Time"] > $max_execution_time) {
        $sql = "KILL " . $row["Id"];
        mysqli_query($con, $sql);
    }
}
mysqli_close($con);
?>
Well, you can use a destructor to call the mysql_close() function.
I hope I understood your question...
You can use KILL.
KILL CONNECTION is the same as KILL with no modifier: It terminates the connection associated with the given thread_id.
KILL QUERY terminates the statement that the connection is currently executing, but leaves the connection itself intact.
You should KILL QUERY in the shutdown event, and then do a mysqli_close().
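A minimal sketch of that idea, assuming mysqli and using a second connection purely to issue the KILL (host and credentials are placeholders). Note that shutdown functions run on normal exit and on fatal errors, but not if the process is killed with SIGKILL:
<?php
$mysqli = mysqli_connect('localhost', 'login', 'pass', 'my_db');
$threadId = $mysqli->thread_id; // remember which connection will run the slow query

register_shutdown_function(function () use ($threadId) {
    // Open a separate connection just to kill whatever statement the first
    // connection is still executing, then clean up.
    $killer = mysqli_connect('localhost', 'login', 'pass', 'my_db');
    if ($killer) {
        $killer->query('KILL QUERY ' . (int) $threadId);
        $killer->close();
    }
});

$mysqli->query('SELECT COUNT(*) FROM `huge_table`'); // the long-running query
$mysqli->close();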
You might get some valuable information from this question about timeouts: Client times out, while MySQL query remains running?

Yii Framework - InnoDB vs MyISAM

I have a question: I have built a big application with Yii and InnoDB and ran into the problem that inserts/updates take a really, really long time. Here is my PHP report:
INNODB:
admin User update 55.247464895248 seconds
ekuskov User update 13.282548904419 seconds
doriwall User update 0.002094030380249 seconds
MYISAM:
admin User update 7.8317859172821 seconds
ekuskov User update 1.6304929256439 seconds
doriwall User update 0.0020859241485596 seconds
Can anyone suggest some solution to speed up the insert/update?
EDIT ----------------------------------------------
Now I used a very simple insert loop:
public function run($args) {
    $time = -microtime(true);
    $begin = DateTime::createFromFormat('Y-m-d H:i:s', '2010-01-01 00:00:00');
    $end = DateTime::createFromFormat('Y-m-d H:i:s', '2013-01-01 00:00:00');
    $end->add(new DateInterval('P1D'));
    $interval = DateInterval::createFromDateString('1 day');
    $days = new DatePeriod($begin, $interval, $end);
    $count = 0;
    foreach ($days as $day) {
        echo "i";
        $track = new TimeTracking();
        $track->user_id = 25;
        $track->date = $day->format('Y-m-d H:i:s');
        $track->active = 4;
        $track->save(false);
        $count++;
    }
    $time += microtime(true);
    // DatePeriod is not countable, so count the iterations instead of calling count($days)
    echo "$count items insert - $time seconds\n";
}
and now the INSERT times are following:
InnoDB: items insert - 72.269570827484 seconds
MyISAM: items insert - 0.87537479400635 seconds
[EDIT] And now I measured the time for the whole save action and Yii's model save() function:
UPDATE: model->save(false) - 0.1096498966217 seconds
UPDATE: controller save function () - 0.1302649974823 seconds
CREATE: model->save(false) - 0.052282094955444 seconds
CREATE: controller save function () - 0.057214975357056 seconds
Why does just the save() method take so long?
[EDIT] I have tested save() vs. createCommand() and they take the same time:
$track->save(false);
or
$command = Yii::app()->db->createCommand();
$command->insert('timeTracking', array(
    'id' => NULL,
    'date' => $track->date,
    'active' => $track->active,
    'user_id' => $track->user_id,
));
EDIT -----------------------------
And here are the statistics for inserting 1,097 objects:
save(): 0.86-0.94,
$command->insert(): 0.67-0.72,
$command->execute(): 0.46-0.48,
mysql_query(): 0.33-0.36
FINAL ANSWER: If you need to run massive INSERTs or UPDATEs, you should consider writing those functions with direct MySQL calls; that saves almost 70% of the execution time.
Regards,
Edgar
A table crawling on insert and update may indicate that you've got a bit carried away with your indexes. Remember that the DB has to stop and update every index on each insert or update.
With Yii and InnoDB you should wrap your commands in a transaction like so:
$transaction = Yii::app()->db->beginTransaction();
try {
    // TODO Loop through all of your inserts
    $transaction->commit();
} catch (Exception $ex) {
    // There was some type of error. Roll back the last transaction
    $transaction->rollback();
}
The solution was: for those tables where you need big inserts and a quick response, convert them to MyISAM. Otherwise the user has to wait a long time and there is a risk that PHP's max script execution time will stop your script.
InnoDB is a much more complex, feature-rich engine than MyISAM in many respects. That makes it slower and there is not much you can do in your queries, configuration, or otherwise to fix that. However, MySQL is trying to close the gap with recent updates.
Look at version 5.6:
http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html
Your best bet may be to upgrade your version of MySQL if you're behind.
InnoDB gives you the option to define relations and constraints, which makes the database more robust, and you will be able to generate models & CRUD with those relations.
You could also consider breaking the queries into smaller ones and executing them one by one.
As MySQL specialists say, use InnoDB until you can explicitly prove that you need MyISAM.
Performance issues are not a good argument.
If you have a big application, you will probably face problems with table-level locks and data inconsistency. So use InnoDB.
Your performance issue may be connected to a lack of indexes, or to hard disk and file system issues.
I have worked with tables having hundreds of millions of rows which were constantly updated and inserted into from several connections. Those tables used the InnoDB engine.
Of course, if you have data that should be added in a batch, add it using a single INSERT statement, as shown below.
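For illustration, using the timeTracking table from the question above, one multi-row INSERT replaces many single-row ones:
INSERT INTO timeTracking (date, active, user_id) VALUES
    ('2010-01-01 00:00:00', 4, 25),
    ('2010-01-02 00:00:00', 4, 25),
    ('2010-01-03 00:00:00', 4, 25);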

How can I INSERT 1 million entries to my MySQL DB?

I want to test the speed of my SQL queries (update queries) with a real "load" on my DB. I'm relatively new to DBs and I am doing more complex queries than I have before, and I'm getting scared by people talking about performance like "30 seconds for 3000 records to be updated" etc. So I want to have a concrete experiment showing what my performance will be in production.
To achieve this, I want to add 10k, 100k, 1M, 10M records to my DB and then run my query.
My issue is, how can I do this? I have a "name" primary key field that must be unique and be <= 15 characters and have alphanumeric entry. The other fields I want to be the same for all created entries (i.e. a "foo" field I want to start at 10000)
If there's a way to do this and get approximately 1M entries (i.e. could be name collisions) that's fine. I'm just looking for a benchmarking dataset.
If there's a better way to benchmark my query, I'm all ears. I'm planning to simply execute and see how long the query says it takes.
Edit: It's worth noting that this is for a server and has nothing to do with "The Web", so I don't have access to PHP. I'm seeing some PHP scripts to populate; is there perhaps a way to have a perl script write out all these queries and then suck them into the command-line mysql tools?
I'm not sure of how to use just MySQL to accomplish this, but if you have access to PHP, then use this:
<?php
$start = time();
$interval = 10000000; // 10M
$con = mysql_connect( 'server', 'user', 'pass' );
mysql_select_db( 'database' );
for ( $i = 0; $i < $interval; $i++ )
{
    mysql_query( 'INSERT INTO TABLE (fields) VALUES (values)', $con );
}
$endt = time();
$diff = ( $endt - $start );
// gmdate() formats the elapsed seconds as H:i:s (fine for durations under 24 hours)
print( "{$interval} queries took " . gmdate( 'H:i:s', $diff ) . " to execute." );
?>
If you want to optimize queries you should look into MySQL's EXPLAIN statement.
To populate your database I would suggest you write your own little PHP script or check out this one:
http://www.generatedata.com
Regarding your edit:
you could generate a big text file with perl and then use the MySQL CLI to load the file into the table, for more info please see:
http://dev.mysql.com/doc/refman/5.0/en/loading-tables.html
You just want to prepopulate your database so that you have something to run your queries against, and you are not benchmarking the initial insertion process?
In that case, just generate your input data as a tab-delimited file and use mysqlimport to populate your database.
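A minimal sketch of that approach (PHP here, though a perl script works the same way; the table name mytable and the name/foo columns are placeholders matching the question's description):
<?php
// Write one million tab-delimited rows: a unique alphanumeric name
// (well under 15 characters) and a constant "foo" value of 10000.
$fh = fopen('/tmp/mytable.txt', 'w');
for ($i = 0; $i < 1000000; $i++) {
    fwrite($fh, sprintf("row%09d\t%d\n", $i, 10000));
}
fclose($fh);
// Then load it from the shell (mysqlimport derives the table name from the file name):
//   mysqlimport --local -u user -p mydatabase /tmp/mytable.txt
// or from the mysql client:
//   LOAD DATA LOCAL INFILE '/tmp/mytable.txt' INTO TABLE mytable (name, foo);
?>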

How to set a maximum execution time for a mysql query?

I would like to set a maximum execution time for sql queries like set_time_limit() in php. How can I do ?
I thought it has been around a little longer, but according to this,
MySQL 5.7.4 introduces the ability to set server side execution time limits, specified in milliseconds, for top level read-only SELECT statements.
SELECT
    /*+ MAX_EXECUTION_TIME(1000) */ -- in milliseconds
    *
FROM table;
Note that this only works for read-only SELECT statements.
Update: This variable was added in MySQL 5.7.4 and renamed to max_execution_time in MySQL 5.7.8. (source)
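For reference, once renamed, the limit can also be set as a session or server default instead of per statement (a sketch of the 5.7.8+ syntax; the value is in milliseconds and 0 means no limit):
SET SESSION max_execution_time = 2000;
SET GLOBAL max_execution_time = 2000;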
If you're using the mysql native driver (common since php 5.3), and the mysqli extension, you can accomplish this with an asynchronous query:
<?php
// Here's an example query that will take a long time to execute.
$sql = "
    select *
    from information_schema.tables t1
    join information_schema.tables t2
    join information_schema.tables t3
    join information_schema.tables t4
    join information_schema.tables t5
    join information_schema.tables t6
    join information_schema.tables t7
    join information_schema.tables t8
";

$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query($sql, MYSQLI_ASYNC | MYSQLI_USE_RESULT);
$links = $errors = $reject = [];
$links[] = $mysqli;

// wait up to 1.5 seconds
$seconds = 1;
$microseconds = 500000;
$timeStart = microtime(true);

if (mysqli_poll($links, $errors, $reject, $seconds, $microseconds) > 0) {
    echo "query finished executing. now we start fetching the data rows over the network...\n";
    $result = $mysqli->reap_async_query();
    if ($result) {
        while ($row = $result->fetch_row()) {
            // print_r($row);
            if (microtime(true) - $timeStart > 1.5) {
                // we exceeded our time limit in the middle of fetching our result set.
                echo "timed out while fetching results\n";
                var_dump($mysqli->close());
                break;
            }
        }
    }
} else {
    echo "timed out while waiting for query to execute\n";
    // kill the thread to stop the query from continuing to execute on
    // the server, because we are abandoning it.
    var_dump($mysqli->kill($mysqli->thread_id));
    var_dump($mysqli->close());
}
The flags I'm giving to mysqli_query accomplish two important things. MYSQLI_ASYNC tells the client driver to enable asynchronous mode, which forces us to use more verbose code but lets us use a timeout (and also issue concurrent queries if you want!). MYSQLI_USE_RESULT tells the client not to buffer the entire result set into memory.
By default, PHP configures its MySQL client libraries to fetch the entire result set of your query into memory before it lets your PHP code start accessing rows in the result. This can take a long time for a large result. We disable it; otherwise we risk timing out while waiting for the buffering to complete.
Note that there are two places where we need to check for exceeding a time limit:
the actual query execution
fetching the results (data)
You can accomplish something similar with PDO and the regular mysql extension. They don't support asynchronous queries, so you can't set a timeout on the query execution time. However, they do support unbuffered result sets, so you can at least implement a timeout on fetching the data, as sketched below.
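A minimal PDO sketch of that idea (the DSN, credentials, and table name are placeholders):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'root', '');
// Turn off buffering so rows stream as they arrive instead of being
// copied into memory all at once.
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$timeStart = microtime(true);
// This call still blocks until MySQL produces the first row, so only the
// fetch phase is covered by this timeout.
$stmt = $pdo->query('select * from tbl_with_1billion_rows');
while ($row = $stmt->fetch(PDO::FETCH_NUM)) {
    if (microtime(true) - $timeStart > 1.5) {
        echo "timed out while fetching results\n";
        break;
    }
    // process $row ...
}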
For many queries, mysql is able to start streaming the results to you almost immediately, and so unbuffered queries alone will allow you to somewhat effectively implement timeouts on certain queries. For example, a
select * from tbl_with_1billion_rows
can start streaming rows right away, but,
select sum(foo) from tbl_with_1billion_rows
needs to process the entire table before it can start returning the first row to you. This latter case is where the timeout on an asynchronous query will save you. It will also save you from plain old deadlocks and other stuff.
ps - I didn't include any timeout logic on the connection itself.
Please rewrite your query like:
select /*+ MAX_EXECUTION_TIME(1000) */ * from table
This hint will make MySQL abort the query after the specified time (in milliseconds).
You can find the answer on this other S.O. question:
MySQL - can I limit the maximum time allowed for a query to run?
A cron job that runs every second on your database server, connecting and doing something like this:
SHOW PROCESSLIST
Find all connections with a query time larger than your maximum desired time
Run KILL [process id] for each of those processes
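In SQL, those steps look roughly like this (the 30-second threshold and the connection id are only examples):
SELECT id, user, time, info
FROM information_schema.processlist
WHERE command = 'Query' AND time > 30;
-- then, for each id returned:
KILL 12345;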
pt-kill has an option for this. But it is on-demand, not continuously monitoring. It does what @Rafa suggested. However, see --sentinel for a hint of how to come close with cron.