I would like to set a maximum execution time for SQL queries, like set_time_limit() in PHP. How can I do that?
I thought it had been around a little longer, but according to this,
MySQL 5.7.4 introduces the ability to set server side execution time limits, specified in milliseconds, for top level read-only SELECT statements.
SELECT
/*+ MAX_EXECUTION_TIME(1000) */ --in milliseconds
*
FROM table;
Note that this only works for read-only SELECT statements.
Update: This variable was added in MySQL 5.7.4 and renamed to max_execution_time in MySQL 5.7.8. (source)
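If you don't want to annotate every statement with the hint, the same limit can also be set as a session (or global) variable: max_statement_time in 5.7.4–5.7.7, max_execution_time from 5.7.8 on, also in milliseconds. A minimal sketch, assuming mysqli and MySQL 5.7.8+ (the table name is just a placeholder):
<?php
// Apply a server-side 1-second limit to every subsequent top-level
// read-only SELECT on this connection (value is in milliseconds).
$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query("SET SESSION max_execution_time = 1000");

// A SELECT that runs longer than 1 second now aborts with a
// "maximum statement execution time exceeded" error instead of running on.
$result = $mysqli->query("SELECT * FROM some_big_table");
if ($result === false) {
    echo "query was cut off: " . $mysqli->error . "\n";
}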
If you're using the mysql native driver (common since php 5.3), and the mysqli extension, you can accomplish this with an asynchronous query:
<?php
// Here's an example query that will take a long time to execute.
$sql = "
select *
from information_schema.tables t1
join information_schema.tables t2
join information_schema.tables t3
join information_schema.tables t4
join information_schema.tables t5
join information_schema.tables t6
join information_schema.tables t7
join information_schema.tables t8
";
$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query($sql, MYSQLI_ASYNC | MYSQLI_USE_RESULT);
$links = $errors = $reject = [];
$links[] = $mysqli;
// wait up to 1.5 seconds
$seconds = 1;
$microseconds = 500000;
$timeStart = microtime(true);
if (mysqli_poll($links, $errors, $reject, $seconds, $microseconds) > 0) {
    echo "query finished executing. now we start fetching the data rows over the network...\n";
    $result = $mysqli->reap_async_query();
    if ($result) {
        while ($row = $result->fetch_row()) {
            // print_r($row);
            if (microtime(true) - $timeStart > 1.5) {
                // we exceeded our time limit in the middle of fetching our result set.
                echo "timed out while fetching results\n";
                var_dump($mysqli->close());
                break;
            }
        }
    }
} else {
    echo "timed out while waiting for query to execute\n";
    // kill the thread to stop the query from continuing to execute on
    // the server, because we are abandoning it.
    var_dump($mysqli->kill($mysqli->thread_id));
    var_dump($mysqli->close());
}
The flags I'm giving to mysqli_query accomplish important things. MYSQLI_ASYNC tells the client driver to enable asynchronous mode, which forces us to use more verbose code, but lets us use a timeout (and also issue concurrent queries, if you want!). MYSQLI_USE_RESULT tells the client not to buffer the entire result set into memory.
By default, PHP configures its MySQL client libraries to fetch the entire result set of your query into memory before your PHP code is allowed to start accessing rows. Transferring a large result can take a long time, so we disable the buffering; otherwise we risk timing out while waiting for it to complete.
Note that there are two places where we need to check for exceeding a time limit:
the actual query execution
fetching the results (data)
You can accomplish something similar with PDO and the regular mysql extension. They don't support asynchronous queries, so you can't put a timeout on the query execution itself. However, they do support unbuffered result sets, so you can at least implement a timeout on fetching the data.
For many queries, mysql is able to start streaming the results to you almost immediately, and so unbuffered queries alone will allow you to somewhat effectively implement timeouts on certain queries. For example, a
select * from tbl_with_1billion_rows
can start streaming rows right away, but,
select sum(foo) from tbl_with_1billion_rows
needs to process the entire table before it can start returning the first row to you. This latter case is where the timeout on an asynchronous query will save you. It will also save you from plain old deadlocks and other stuff.
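For reference, here is roughly what the unbuffered PDO variant described above could look like. This is only a sketch, assuming a local MySQL DSN; remember that PDO can't time out the execution phase itself, only the fetching:
<?php
// Unbuffered PDO query with a time limit enforced while fetching rows.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'root', '');
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false); // stream rows instead of buffering

$timeStart = microtime(true);
// This call blocks until the server starts returning rows.
$stmt = $pdo->query("select * from tbl_with_1billion_rows");

while ($row = $stmt->fetch(PDO::FETCH_NUM)) {
    if (microtime(true) - $timeStart > 1.5) {
        echo "timed out while fetching results\n";
        break;
    }
    // process $row ...
}
$stmt->closeCursor();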
ps - I didn't include any timeout logic on the connection itself.
Please rewrite your query like
select /*+ MAX_EXECUTION_TIME(1000) */ * from table
This statement will kill your query after the specified time has elapsed.
You can find the answer on this other S.O. question:
MySQL - can I limit the maximum time allowed for a query to run?
Set up a cron job that runs every second on your database server, connecting and doing something like this:
SHOW PROCESSLIST
Find all connections with a query time larger than your maximum desired time
Run KILL [process id] for each of those processes
pt_kill has an option for this, but it is on-demand, not continuously monitoring. It does what #Rafa suggested. However, see --sentinel for a hint of how to come close with cron.
Related
I've noticed that if I execute a long-running MySQL query with PHP using mysql_query() (I know I'm not supposed to use that) and the PHP process then gets killed, the query continues to run on the MySQL server. This is not a persistent connection. The connection is made with:
$db = mysql_connect($host, $login, $pass, false);
$sql = 'SELECT COUNT(*) FROM `huge_table`';
$result = mysql_query($sql, $db);
For example, let's say I have a 1 billion row table and a php process does this for some reason:
SELECT COUNT(*) FROM `huge_table`
And then it times out (say because I'm running php-fpm with request_terminate_timeout=5), so it kills the process after 5 seconds to make sure it doesn't hog things.
Even though the process is killed, the query still runs on MySQL, even far past wait_timeout.
Is there any way to make sure that if the PHP process exits for whatever reason, it also kills any running queries that it made?
I'm using TokuDB 5.5.38-tokudb-7.1.7-e, which is MySQL 5.5.38.
crickeys, when a PHP script starts to execute and it gets to the part where it executes a MySQL query, that query is handed over to MySQL. Control of the query is no longer in PHP's hands; PHP at that point is only waiting for a response from MySQL so it can proceed. Killing the PHP script doesn't affect the MySQL query because, well, the query is MySQL's business.
Put another way, PHP comes to the door, knocks, hands over the goods and waits for you to bring back a response so he can be on his way. Shooting him won't affect what's going on behind the door.
You could run something like this to retrieve the longest running processes and kill them:
<?php
$con = mysqli_connect("example.com", "peter", "pass", "my_db");
// Check connection
if (mysqli_connect_errno()) {
    die("Failed to connect to MySQL: " . mysqli_connect_error());
}
$max_execution_time = 60; // seconds; set this to your maximum desired query time
$result = mysqli_query($con, "SHOW FULL PROCESSLIST");
while ($row = mysqli_fetch_array($result)) {
    if ($row["Time"] > $max_execution_time) {
        mysqli_query($con, "KILL " . $row["Id"]);
    }
}
mysqli_close($con);
?>
Well, you can use a destructor to call the mysql_close() function.
I hope I understood your question...
You can use KILL.
KILL CONNECTION is the same as KILL with no modifier: It terminates the connection associated with the given thread_id.
KILL QUERY terminates the statement that the connection is currently executing, but leaves the connection itself intact.
You should KILL QUERY in the shutdown event, and then do a mysqli_close().
You might get some valuable information from this question about timeouts: Client times out, while MySQL query remains running?
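Here's a rough sketch of that shutdown-handler idea using mysqli. The connection running the long query is busy, so a second connection is opened just to issue the KILL QUERY; note this only helps when PHP actually gets to run its shutdown functions, so a hard kill of the process will still bypass it:
<?php
$mysqli = mysqli_connect('localhost', 'root', '', 'my_db');
$queryThreadId = $mysqli->thread_id;

// On shutdown, open a second connection and kill the statement that the
// first connection is still executing.
register_shutdown_function(function () use ($queryThreadId) {
    $killer = mysqli_connect('localhost', 'root', '', 'my_db');
    $killer->query("KILL QUERY " . (int) $queryThreadId);
    $killer->close();
});

$mysqli->query("SELECT COUNT(*) FROM `huge_table`"); // the long-running statement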
I am trying to execute SELECT ... FOR UPDATE query using Laravel 3:
SELECT * from projects where id = 1 FOR UPDATE;
UPDATE projects SET money = money + 10 where id = 1;
I have tried several things for several hours now:
DB::connection()->pdo->exec($query);
and
DB::query($query)
I have also tried adding START TRANSACTION; ... COMMIT; to the query
and I tried to separate the SELECT from the UPDATE in two different parts like this:
DB::query($select);
DB::query($update);
Sometimes I get 0 rows affected, sometimes I get an error like this one:
SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.
SQL: UPDATE `sessions` SET `last_activity` = ?, `data` = ? WHERE `id` = ?
I want to lock the row in order to update sensitive data, using Laravel's database connection.
Thanks.
If all you need to do is increase money by 10, you don't need to lock the row before the update; a single UPDATE statement is atomic on its own. Simply executing the update query will do the job. The SELECT query would only slow down your script and doesn't help in this case.
UPDATE projects SET money = money + 10 where id = 1;
I would use different queries for sure, so you have control over what you are doing.
I would use a transaction.
If we read this simple explanation, PDO transactions are quite straightforward. It gives us this simple but complete example, which illustrates how everything works as we should expect (consider $db to be your DB::connection()->pdo).
try {
    $db->beginTransaction();
    $db->exec("SOME QUERY");

    $stmt = $db->prepare("SOME OTHER QUERY?");
    $stmt->execute(array($value));

    $stmt = $db->prepare("YET ANOTHER QUERY??");
    $stmt->execute(array($value2, $value3));

    $db->commit();
}
catch (PDOException $ex) {
    // Something went wrong, roll back!
    $db->rollBack();
    echo $ex->getMessage();
}
Let's go to your real statements. For the first of them, the SELECT, I wouldn't use exec but query, since, as stated here,
PDO::exec() does not return results from a SELECT statement. For a
SELECT statement that you only need to issue once during your program,
consider issuing PDO::query(). For a statement that you need to issue
multiple times, prepare a PDOStatement object with PDO::prepare() and
issue the statement with PDOStatement::execute().
and assign its result to some temp variable, like
$result = $db->query($select);
After this execution, I would call $result->fetchAll() or $result->closeCursor(), since, as we can read here,
If you do not fetch all of the data in a result set before issuing
your next call to PDO::query(), your call may fail. Call
PDOStatement::closeCursor() to release the database resources
associated with the PDOStatement object before issuing your next call
to PDO::query().
Then you can exec the update:
$affected = $db->exec($update);
Note that PDO::exec() returns the number of affected rows (an integer), not a statement object, so after the UPDATE there is nothing left to fetch or close.
If the aim is
to lock the row in order to update sensitive data, using Laravel's database connection.
Maybe you can use PDO transactions:
DB::connection()->pdo->beginTransaction();
DB::connection()->pdo->commit();
DB::connection()->pdo->rollBack();
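Putting those pieces together, a sketch of what the locking read plus update could look like on Laravel 3's underlying PDO handle (table and column names taken from the question, error handling kept minimal):
<?php
$pdo = DB::connection()->pdo;

try {
    $pdo->beginTransaction();

    // Lock the row for the duration of the transaction (requires InnoDB).
    $stmt = $pdo->prepare("SELECT * FROM projects WHERE id = ? FOR UPDATE");
    $stmt->execute(array(1));
    $project = $stmt->fetch(PDO::FETCH_ASSOC);
    $stmt->closeCursor(); // release the result set before the next statement

    // Update the sensitive column while the lock is held.
    $update = $pdo->prepare("UPDATE projects SET money = money + 10 WHERE id = ?");
    $update->execute(array(1));

    $pdo->commit(); // releases the row lock
} catch (PDOException $ex) {
    $pdo->rollBack();
    echo $ex->getMessage();
}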
I am using a MySQL database with phpMyAdmin as the frontend (I am not sure I have remote/client access yet). I have a script that queries the database, and I would like to see how long each query takes. What is the easiest way to do this? Could I install another PHP app on the server?
If you have access to the MySQL config files, you can enable the general query log and the slow query log.
See here for details.
Since, I think, 5.1.6, you can also do it at runtime:
SET GLOBAL long_query_time = 10 /* which queries are considered long */
SET GLOBAL slow_query_log = 1
Try it anyway, though; I don't remember the exact version in which it appeared.
For most applications that I work on, I include query profiling output that can easily be turned on in the development environment. This outputs the SQL, execution time, stack trace and a link to display explain output. It also highlights queries running longer than 1 second.
Although you probably don't need something that sophisticated, you can get a fairly good sense of query run times by writing a PHP function that wraps the query execution and stores debug information in the session (or simply outputs it). For example:
function run_query($sql, $debug = false, $output = false) {
    $start = microtime(true);
    $q = mysql_query($sql);
    $time = microtime(true) - $start;
    if ($debug) {
        $debug = "$sql<br/>$time<br/><br/>";
        if ($output) {
            print $debug;
        } else {
            $_SESSION['sql_debug'] .= $debug;
        }
    }
    return $q;
}
That's just kind of a rough idea. You can tweak it however you want.
Hope that helps -
You should set long_query_time to 1 since setting it to 10 will exclude most if not all of your queries.
In the MySQL prompt, type
SET @@long_query_time = 1;
We have a lot of queries
select * from tbl_message
that get stuck on the state "Writing to net". The table has 98k rows.
The thing is... we aren't even executing any query like that from our application, so I guess the question is:
What might be generating the query?
...and why does it get stuck in the state "Writing to net"?
I feel stupid asking this question, but I'm 99.99% sure that our application is not executing a query like that against our database... we are, however, executing a couple of queries against that table using a WHERE clause:
SELECT Count(*) as StrCount FROM tbl_message WHERE m_to=1960412 AND m_restid=948
SELECT Count(m_id) AS NrUnreadMail FROM tbl_message WHERE m_to=2019422 AND m_restid=440 AND m_read=1
SELECT * FROM tbl_message WHERE m_to=2036390 AND m_restid=994 ORDER BY m_id DESC
I have searched our application several times for select * from tbl_message but haven't found anything... but still, the query log on our MySQL server is full of SELECT * FROM tbl_message queries.
Since applications don't magically generate queries as they like, I think it's rather likely that there's a mistake somewhere in your application that's causing this. Here are a few suggestions you can use to track it down. I'm guessing that you're using PHP, since you're using MySQL, so I'll use that for my examples.
Try adding comments in front of all your queries in the application, like this:
$sqlSelect = "/* file.php, class::method() */";
$sqlSelect .= "SELECT * FROM foo ";
$sqlSelect .= "WHERE criteria";
The comment will show up in your query log. If you're using some kind of database API wrapper, you could potentially add these messages automatically:
function query($sql)
{
    $backtrace = debug_backtrace();
    // The function that executed the query
    $prev = $backtrace[1];
    $newSql = sprintf("/* %s */ ", $prev["function"]);
    $newSql .= $sql;
    mysql_query($newSql) or handle_error();
}
In case you're not using a wrapper, but rather executing the queries directly, you could use the runkit extension and the function runkit_function_rename to rename mysql_query (or whatever you're using) and intercept the queries.
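A sketch of that interception trick, assuming the runkit extension with runkit.internal_override=1 (needed before built-in functions can be renamed); the replacement simply tags each query with the caller's file and line:
<?php
// Rename the real function out of the way, then define a replacement with
// the same name that prepends a comment identifying the caller.
runkit_function_rename('mysql_query', 'mysql_query_original');

function mysql_query($sql, $link = null)
{
    $backtrace = debug_backtrace();
    $caller = $backtrace[0];
    $tagged = sprintf("/* %s:%d */ %s", $caller['file'], $caller['line'], $sql);
    return $link === null
        ? mysql_query_original($tagged)
        : mysql_query_original($tagged, $link);
}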
There are (at least) two data retrieval modes for MySQL. With the C API, you call either mysql_store_result() or mysql_use_result().
mysql_store_result() returns when all result data is transferred from the MySQL server to your process' memory, i.e. no data has to be transferred for further calls to mysql_fetch_row().
However, by using mysql_use_result() each record has to be fetched individually if and when mysql_fetch_row() is called. If your application does some computing that takes longer than the time period specified in net_write_timeout between two calls to mysql_fetch_row() the MySQL server considers your connection to be timed out.
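In PHP the same two modes are exposed through mysqli's result-mode flag. A sketch, assuming it is slow per-row processing on the client that is tripping net_write_timeout (the processing function is made up):
<?php
$mysqli = mysqli_connect('localhost', 'root', '', 'my_db');

// MYSQLI_USE_RESULT is the counterpart of mysql_use_result(): rows are pulled
// from the server one at a time as you fetch them. The default,
// MYSQLI_STORE_RESULT, buffers the whole result set up front instead.
$result = $mysqli->query("SELECT * FROM tbl_message", MYSQLI_USE_RESULT);

while ($row = $result->fetch_assoc()) {
    // If this per-row work regularly takes longer than net_write_timeout,
    // the server may decide the client has gone away and drop the connection,
    // with the thread sitting in "Writing to net" in the meantime.
    do_expensive_work($row); // hypothetical placeholder
}
$result->free();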
Temporarily enable the query log by putting
log=
into your my.cnf file, restart mysql and watch the query log for those mystery queries (you don't have to give the log a name, it'll assume one from the host value).
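If you can't edit my.cnf or would rather not restart the server, the general log can usually be switched on at runtime as well (MySQL 5.1 and later). A sketch, with a made-up log path and assuming a user with the SUPER privilege:
<?php
// Turn the general query log on without a restart, watch for the mystery
// queries, then turn it back off.
$mysqli = mysqli_connect('localhost', 'root', '');
$mysqli->query("SET GLOBAL general_log_file = '/tmp/mysql-general.log'");
$mysqli->query("SET GLOBAL general_log = 'ON'");

// ...reproduce the problem, then:
$mysqli->query("SET GLOBAL general_log = 'OFF'");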