How do I automatically interrupt long queries in MySQL, if that is possible?
I understand that I should optimize the queries instead, but right now I only have access to the database server.
While you are optimizing your queries, you can interrupt them by killing the corresponding thread.
As far as I know, you cannot stop a query once it has started executing, other than by killing its thread.
I understand that I should optimize the queries instead.
That's the way you should go.
Another possibility:
You may want to have a look at SQL Transactions if applicable in your case.
There is no way in MySQL to limit how long a query can run and then have it time out. You can have a "long query killer" script which catches, kills, and reports bad queries, e.g. by looking at the Time column of SHOW FULL PROCESSLIST. You can do it via a stored procedure called on a schedule, or via an application that connects to MySQL, also on a scheduled basis.
Try killing processes whose query time is too long:
$result = mysql_query("SHOW FULL PROCESSLIST");
while ($row = mysql_fetch_array($result)) {
    $process_id = $row["Id"];
    // Kill anything that has been running for more than 200 seconds.
    if ($row["Time"] > 200) {
        mysql_query("KILL $process_id");
    }
}
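The scheduled stored-procedure approach mentioned above can be sketched in SQL. This is only an illustration: the procedure name, the 200-second cutoff, and the one-minute schedule are my assumptions, and it requires MySQL 5.1+ (for information_schema.PROCESSLIST) with the event scheduler enabled.

```sql
DELIMITER $$
CREATE PROCEDURE kill_long_running()  -- hypothetical procedure name
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE pid BIGINT;
  -- All queries that have been running longer than 200 seconds.
  DECLARE cur CURSOR FOR
    SELECT id FROM information_schema.PROCESSLIST
    WHERE command = 'Query' AND time > 200;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO pid;
    IF done THEN LEAVE read_loop; END IF;
    -- KILL does not accept a variable directly, so build it dynamically.
    SET @s = CONCAT('KILL ', pid);
    PREPARE stmt FROM @s;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END$$
DELIMITER ;

-- Run it every minute; needs: SET GLOBAL event_scheduler = ON;
CREATE EVENT kill_long_queries
  ON SCHEDULE EVERY 1 MINUTE
  DO CALL kill_long_running();
```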
Related
I started getting the following errors as my database grew. The table is now about 4 GB, with millions of rows.
Can't Laravel handle large tables?
$count = DB::table('table1')->distinct('data')->count(["data"]);
$count2 = DB::table('table2')->distinct('data')->count(["data"]);
SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute. (SQL: select count(distinct data) as aggregate from data)
I think the table is very large and the first query takes some time to finish, while you are running another huge query directly afterwards that also needs a long time to execute. I think you need to run the second query only after the first one finishes.
Please try this:
$count = DB::table('table1')->distinct('data')->count(["data"]);
if ($count) {
    $count2 = DB::table('table2')->distinct('data')->count(["data"]);
}
But this requires a truthy result from the first query.
Try using:
if ($count > -1)
or
if (DB::table('table1')->distinct('data')->count(["data"]) > -1) {
    $count2 = DB::table('table2')->distinct('data')->count(["data"]);
}
For the MySQL statement this might be faster (note that COUNT(DISTINCT ...) must be written without a space before the parenthesis, and the raw expression should not contain its own SELECT):
$count = DB::table('table1')->select(DB::raw('COUNT(DISTINCT data) AS aggregate'))->value('aggregate');
And the best solution may be to execute a raw statement, since Laravel allows that too.
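The raw statement Laravel builds here is visible in the error message above, so running it directly might look like the sketch below. DB::select returns an array of row objects; the table name table1 is taken from the question.

```php
$rows = DB::select('select count(distinct data) as aggregate from table1');
$count = $rows[0]->aggregate;
```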
Please tell me if you encounter any problems.
I have a PDO connection to a MySQL database. Making the connection is lightning fast. Subsequently I run a very complicated query 1 (using temp tables, SELECT, INSERT and other operations one after another) which appears to run extremely fast (around 0.1 seconds). I know this query 1 is successfully executed every time.
Much further in the code I am opening a new PDO connection to do a simple SELECT statement. This SELECT statement seemed not to be fetching any results (it will only fetch results if the complicated query 1 has successfully finished).
As I had opened a connection earlier to execute the complicated query 1, I thought I would have to close that one first. I added the code below to unset the connection. This helped; however, I can now see that it takes a few minutes to run just this piece of code: unset($stmt);
$stmt = $pdo->prepare($QUERY);
$stmt->execute();
unset($stmt);
unset($pdo);
Could it be that my complicated query is running in the background while I think it is finished, but it actually isn't?
My question is: Why does executing this code: unset($stmt); take so extremely long?
The query I was executing simply took too long and was still running in the background while the PHP parser continued.
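Given that diagnosis, one common way to avoid blocking inside unset($stmt) is to drain or close the result set explicitly before doing anything else on the connection, which is also what the 2014 error message in the earlier question suggests. A sketch, assuming $pdo and $QUERY are set up as in the question and that $QUERY produces a result set:

```php
$stmt = $pdo->prepare($QUERY);
$stmt->execute();
$rows = $stmt->fetchAll();  // buffer every result row on the client side
$stmt->closeCursor();       // discard anything still pending on the wire
```

With the result set fully consumed, the connection is free for the next statement and tearing down $stmt no longer has to wait for the server.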
I have an expensive reporting query that can take 1-20+ seconds to run. (Depending on how much data it has)
Is there a way to kill a mysql process/query from running after a certain amount of time?
I see this:
mysql auto kill query
Is this the best route? I have also read that I should try to improve my queries. I will look into this too, but I am just asking for suggestions on the best route.
First run
show processlist;
then find the query that you want to kill, then run
kill 1;
where 1 is the Id of the query you want to kill; choose it from the list.
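If you prefer not to eyeball the processlist output by hand, the same information is available from information_schema in MySQL 5.1 and later. The 60-second threshold below is just an example value:

```sql
SELECT id, user, time, info
FROM information_schema.PROCESSLIST
WHERE command = 'Query' AND time > 60
ORDER BY time DESC;
```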
Is there a way to prevent a single query from appearing in the MySQL slow query log?
One could actually disable logging before executing the query (by setting a global variable) and re-enable it afterwards, but this would prevent logging in other threads as well, which is not desirable.
Do you have any ideas?
In MySQL 5.1 and later, you can make runtime changes to the time threshold for which queries are logged in the slow query log. Set it to something ridiculously high and the query is not likely to be logged.
SET SESSION long_query_time = 20000;
SELECT ...whatever...
SET SESSION long_query_time = 2;
Assuming 2 is the normal threshold you use.
I don't know if you can prevent a single query from appearing in the slow query log, but you could grep the query out of the log afterwards. Having said that, if I remember correctly, every slow query is dumped as multiple lines, so it would not be easy to grep out, but not impossible.
mysqldumpslow has a "-g pattern" option to "Consider only queries that match the (grep-style) pattern." which may help in your situation.
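For example, to consider only entries mentioning a particular table, the invocation might look like this; the pattern and the log path are placeholders, not values from the original answer:

```shell
mysqldumpslow -g "table1" /var/log/mysql/mysql-slow.log
```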
I hope this helps.
Cheers
Tymek
I have a slow MySQL query in my application that I need to re-write. The problem is, it's only slow on my production server and only when it's not cached. The first time I run it, it will take 12 seconds, then any time after that it'll be 500 milliseconds.
Is there an easy way to test this query without it hitting the query cache so I can see the results of my refactoring?
MySQL supports preventing the caching of single queries. Try
SELECT SQL_NO_CACHE field_a, field_b FROM table;
alternatively, you can disable the query cache for the current session:
SET SESSION query_cache_type = OFF;
See http://dev.mysql.com/doc/refman/5.1/en/query-cache.html
To add to johannes's good answer, what I do is
RESET QUERY CACHE;
This has the slight added advantage of not requiring any changes to either the statements I'm executing or the connection.
A trivial thing to do is to alter the statement you're executing somehow, such as putting a random number in a comment, because queries are found in the cache only if they are byte-identical to some previous query.
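For instance, varying a comment is enough to miss the cache; the number is arbitrary, and the field and table names are reused from the earlier example:

```sql
SELECT /* 1678923 */ field_a, field_b FROM table;
```

Change the number on each run and the server treats it as a brand-new query.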