How can we optimize this DELETE query?
DELETE FROM student_score
WHERE lesson_id IS NOT NULL
  AND id NOT IN (SELECT MaxID FROM temp)
ORDER BY id
LIMIT 1000
The subquery SELECT MaxID FROM temp returns about 35k rows, and temp is a temporary table.
SELECT * FROM student_score WHERE lesson_id IS NOT NULL returns around 500k rows.
I tried using LIMIT and ORDER BY clauses, but that didn't make the query any faster.
IN (SELECT ...) is, in many situations, really inefficient.
Use a multi-table DELETE. This involves a LEFT JOIN ... IS NULL, which is much more efficient.
Once you have mastered that, you might be able to get rid of the temp and simply fold it into the query.
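A minimal sketch of that multi-table form, assuming id is the primary key of student_score:
DELETE s
FROM student_score AS s
LEFT JOIN temp AS t ON t.MaxID = s.id
WHERE s.lesson_id IS NOT NULL
  AND t.MaxID IS NULL;   -- deletes only rows with no match in temp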
Also more efficient is
WHERE NOT EXISTS ( SELECT 1 FROM temp
                   WHERE student_score.id = temp.MaxID )
Also, DELETEing a large number of rows is inherently slow. 1000 is not so bad; 35K is. The reason is the need to save all the potentially-deleted rows until "commit" time.
Other techniques for big deletes: http://mysql.rjweb.org/doc.php/deletebig
Note that one of them explains a more efficient way to walk through the PRIMARY KEY (via id). Note that your query may have to step over lots of ids where lesson_id IS NULL; that is, the LIMIT 1000 is not doing what you expected.
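A sketch of that chunked walk, assuming id is an AUTO_INCREMENT PRIMARY KEY; the id window advances on each pass instead of relying on ORDER BY ... LIMIT:
DELETE FROM student_score
WHERE id BETWEEN 1 AND 1000    -- advance this window each iteration
  AND lesson_id IS NOT NULL
  AND id NOT IN (SELECT MaxID FROM temp);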
You can do it without ORDER BY:
DELETE FROM student_score
WHERE lesson_id IS NOT null
AND id NOT IN (SELECT MaxID FROM temp)
Or like this, using a LEFT JOIN, which is faster:
DELETE s
FROM student_score s
LEFT JOIN temp t1 ON s.id = t1.MaxID
WHERE s.lesson_id IS NOT NULL AND t1.MaxID IS NULL;
I am looking to optimize the query below, which has a subquery against a relation table and an ORDER BY on the subquery's count. Please see the query:
SELECT table1.*,
       ( SELECT COUNT(*)
         FROM table2
         WHERE table2.user_id = table1.id
           AND table2.deleted = 0 ) AS table2_total
FROM table1
WHERE table1.parent_id = 0
ORDER BY table2_total DESC
LIMIT 0, 50
This query works well, but it gets stuck when table2 has more than 50K rows. I have also tried a LEFT JOIN instead of the subquery, but that is even slower:
SELECT table1.*,
       COUNT(DISTINCT table2.id) AS table2_total
FROM table1
LEFT JOIN table2 ON table2.user_id = table1.id
                AND table2.deleted = 0
WHERE table1.parent_id = 0
GROUP BY table1.id
ORDER BY table2_total DESC
LIMIT 0, 50
table2 already has indexes on the user_id and deleted columns.
Is there a better way to optimize this query?
As written, it will go through the entirety of table1, and probe table2 that many times.
Add this composite index to table2: INDEX(user_id, deleted) and remove the INDEX(user_id) that you currently seem to have.
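Assuming the existing single-column index is literally named user_id (adjust to match your schema), that would be:
ALTER TABLE table2
  ADD INDEX idx_user_deleted (user_id, deleted),
  DROP INDEX user_id;   -- the old index name here is an assumption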
You can try adding indexes on the table2.deleted and table1.parent_id columns. Keep in mind that every index slows down INSERTs.
I am deleting duplicate rows in MySQL, keeping only the oldest row (the least id), but I am getting a max-row error.
DELETE n1
FROM item_audit n1, item_audit n2
WHERE n1.id > n2.id AND n1.description = n2.description
Keep in mind, with that join condition you are joining each row to every row before it (with the same description). This is one of those cases where a subquery will be much more effective than a join.
DELETE a
FROM item_audit a
WHERE (a.id, a.description) NOT IN
(SELECT * FROM
(
SELECT MIN(id), description
FROM item_audit
GROUP BY description
) AS realSubQ
)
Actually, assuming id is unique, it can be even simpler:
DELETE a
FROM item_audit a
WHERE a.id NOT IN
(SELECT * FROM
( SELECT MIN(id)
FROM item_audit
GROUP BY description
) AS realSubQ
)
As you discovered, MySQL needs to be "tricked" into allowing the delete target in a subquery, via the extra SELECT * wrapper.
Alternatively, a join on the subquery could be used to reduce the size of the intermediate result set created behind the scenes.
DELETE a
FROM item_audit a
LEFT JOIN (SELECT MIN(id) AS firstId FROM item_audit GROUP BY description) AS aFirst
ON a.id = aFirst.firstId
WHERE aFirst.firstId IS NULL
;
If that fails, you can insert the first ids into a temporary table and should be able to run the subquery version against that.
CREATE TEMPORARY TABLE `old_ids`
SELECT MIN(ID) AS id
FROM item_audit
GROUP BY description;
DELETE a
FROM item_audit a
LEFT JOIN old_ids ON a.id = old_ids.id
WHERE old_ids.id IS NULL
;
In any of these cases, a LIMIT clause can be placed last to make the delete incremental (note that MySQL only accepts LIMIT on the single-table DELETE syntax, not on the multi-table join forms). The last, temp-table version has the benefit that the subquery will not need to be re-evaluated after every incremental delete (and the temporary table can be indexed to speed things up as well).
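For example, a sketch of the incremental variant in single-table form; repeat the DELETE until it affects 0 rows:
ALTER TABLE old_ids ADD INDEX (id);   -- index the temp table for the repeated probes
DELETE FROM item_audit
WHERE id NOT IN (SELECT id FROM old_ids)
LIMIT 1000;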
I have a complicated query:
SELECT * FROM
(
    SELECT Transaction
    FROM table1
    WHERE Transaction IN (SELECT Transaction FROM table2
                          WHERE Plugin = 'XXX' AND Server = 'XXX')
      AND Transaction NOT IN (SELECT Transaction FROM table1
                              WHERE Detail IN ('Monitor','Version','monitor','version'))
    ORDER BY Date DESC, Millisecond DESC
    LIMIT 10
) AS res
I have an index on table1.Detail, and Transaction is the primary key of table2.
It took a while (5-10 secs) for the database to return the result, so I created another index, on table2.Plugin. The query itself is faster now, but a "preparing" state shows up that also takes 5-10 secs, so after creating the new index the total time has not changed at all.
Can someone tell me what's going on and how I can optimize this query? Thank you!
Could you not simply rewrite the query as follows:
SELECT a.Transaction
FROM table1 a
INNER JOIN table2 b ON b.Transaction = a.Transaction
WHERE (b.Plugin='XXX' AND b.Server='XXX')
AND a.Detail NOT IN ('Monitor','Version','monitor','version')
ORDER BY a.Date DESC, a.Millisecond DESC LIMIT 10
So you join table2 directly (which will be faster) and remove all the subqueries.
This should be much faster.
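Since the question mentions an index on Plugin alone, a composite index covering both equality filters should serve the WHERE clause better (the index name is just an example):
ALTER TABLE table2 ADD INDEX idx_plugin_server (Plugin, Server);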
I have a MySQL query that I need to optimize as much as possible (it should have a load time below 5s, if possible).
The query is as follows:
SELECT domain_id, COUNT(keyword_id) as total_count
FROM tableName
WHERE keyword_id IN (SELECT DISTINCT keyword_id FROM tableName WHERE domain_id = X)
GROUP BY domain_id
ORDER BY total_count DESC
LIMIT ...
X is an integer that comes from user input.
domain_id and keyword_id are indexed.
The database is on localhost, so network speed is not a factor.
The subquery in the WHERE clause can return up to 10 million rows, and MySQL seems to have a really hard time computing the COUNT and ordering by it.
I tried combining this query with Solr, but got nowhere; fetching such a high number of rows at once is hard on both MySQL and Solr.
I'm looking for a solution that gives the same results, even if I have to use a different technology or an improved version of this MySQL query.
Thanks!
The query logic is this:
We have a domain, and we search for all the keywords used on that domain (this is the subquery). Then we take all the domains that use at least one of those keywords, grouped by domain, with the number of matching keywords for each domain, and display them ordered descending by that keyword count.
I hope this makes sense.
You may try JOIN instead of subquery:
SELECT tableName.domain_id, COUNT(tableName.keyword_id) AS total_count
FROM tableName
INNER JOIN tableName AS rejoin
ON rejoin.keyword_id = tableName.keyword_id
WHERE rejoin.domain_id = X
GROUP BY tableName.domain_id
ORDER BY total_count DESC
LIMIT ...
I am not 100% sure, but can you try this, please:
SELECT t1.domain_id, COUNT(t1.keyword_id) as total_count
FROM tableName AS t1 LEFT JOIN
(SELECT DISTINCT keyword_id FROM tableName WHERE domain_id = X) AS t2
ON t1.keyword_id = t2.keyword_id
WHERE t2.keyword_id IS NOT NULL
GROUP BY t1.domain_id
ORDER BY total_count DESC
LIMIT ...
The goal is to replace the WHERE IN clause with an INNER JOIN, which will make it a lot quicker. A WHERE IN clause always makes the MySQL server struggle, and it is even more noticeable with huge amounts of data. Use WHERE IN only if it makes your query easier to read/understand, you have a small data set, or there is no other way (but you can probably find another way anyway :) )
In terms of MySQL, all you can do is minimize disk IO for the query using covering indexes, and rewrite it a little more efficiently so that it benefits from them.
Since keyword_id has a match in another copy of the table, COUNT(keyword_id) becomes COUNT(*).
The kind of subquery you use is known to be the worst case for MySQL (it executes the subquery for each row), but I am not sure it should be replaced with a JOIN here, because it might be a proper strategy for your data.
As you probably understand, the query like:
SELECT domain_id, COUNT(*) as total_count
FROM tableName
WHERE keyword_id IN (X,Y,Z)
GROUP BY domain_id
ORDER BY total_count DESC
would have the best performance with a covering composite index (keyword_id, domain_id [,...]), so it is a must. On the other hand, the query like:
SELECT DISTINCT keyword_id FROM tableName WHERE domain_id = X
will have the best performance on a covering composite index (domain_id, keyword_id [,...]). So you need both of them.
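For example (the index names are placeholders):
ALTER TABLE tableName
  ADD INDEX idx_keyword_domain (keyword_id, domain_id),
  ADD INDEX idx_domain_keyword (domain_id, keyword_id);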
Hopefully, though I am not sure, once you have the latter index MySQL can understand that it does not need to materialize all those keyword_id values in the subquery, but only check whether an entry exists in the index, and that intent is better expressed if you do not use DISTINCT.
So, I would try to add those two indexes and rewrite the query as:
SELECT domain_id, COUNT(*) as total_count
FROM tableName
WHERE keyword_id IN (SELECT keyword_id FROM tableName WHERE domain_id = X)
GROUP BY domain_id
ORDER BY total_count DESC
Another option is to rewrite the query as follows:
SELECT domain_id, COUNT(*) as total_count
FROM (
SELECT DISTINCT keyword_id
FROM tableName
WHERE domain_id = X
) as kw
JOIN tableName USING (keyword_id)
GROUP BY domain_id
ORDER BY total_count DESC
Once again you need those two composite indexes.
Which one of the queries is quicker depends on the statistics in your tableName.
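A third option, under the same assumptions, is to materialize the first step in a temporary table so it is computed only once (the name kw is illustrative):
CREATE TEMPORARY TABLE kw (PRIMARY KEY (keyword_id)) AS
SELECT DISTINCT keyword_id FROM tableName WHERE domain_id = X;

SELECT t.domain_id, COUNT(*) AS total_count
FROM kw
JOIN tableName AS t USING (keyword_id)
GROUP BY t.domain_id
ORDER BY total_count DESC
LIMIT ...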
I have a table which counts occurrences of one specific action by different users on different objects:
CREATE TABLE `Actions` (
`object_id` int(10) unsigned NOT NULL,
`user_id` int(10) unsigned NOT NULL,
`actionTime` datetime
);
Every time a user performs this action, a row is inserted. I can count how many actions were performed on each object, and order objects by 'activity':
SELECT object_id, count(object_id) AS action_count
FROM `Actions`
GROUP BY object_id
ORDER BY action_count;
How can I limit the results to the top n objects? The LIMIT clause is applied before the aggregation, so it leads to wrong results. The table is potentially huge (millions of rows) and I probably need to count tens of times per minute, so I'd like to do this as efficient as possible.
edit: Actually, Machine is right, and I was wrong about the time at which LIMIT is applied. My query returned the correct results, but the GUI presenting them threw me off... this kind of makes this question pointless. Sorry!
Actually... LIMIT is applied last, after an eventual HAVING clause, so it should not give you incorrect results. However, since LIMIT is applied last, it will not make the query execute any faster: a temporary table still has to be created and sorted in order of action count before the result is chopped off. Also, remember to sort in descending order:
SELECT object_id, count(object_id) AS action_count
FROM `Actions`
GROUP BY object_id
ORDER BY action_count DESC
LIMIT 10;
You could try adding an index on object_id as an optimization. That way only the index needs to be scanned instead of the Actions table.
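For instance (the index name is arbitrary):
ALTER TABLE `Actions` ADD INDEX idx_object (object_id);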
How about:
SELECT * FROM
(
    SELECT object_id, count(object_id) AS action_count
    FROM `Actions`
    GROUP BY object_id
    ORDER BY action_count DESC
) AS top_actions
LIMIT 15
Also, if you have some measure of the minimum number of actions required to be included (e.g. you know the top n surely have more than 1000), you can increase the efficiency by adding a HAVING clause:
SELECT * FROM
(
    SELECT object_id, count(object_id) AS action_count
    FROM `Actions`
    GROUP BY object_id
    HAVING action_count > 1000
    ORDER BY action_count DESC
) AS top_actions
LIMIT 15
I know this thread is 2 years old, but Stack Overflow still finds it relevant, so here goes my $0.02. ORDER BY clauses are computationally very expensive, so they should be avoided on large tables. A trick I used (in part from Joe Celko's SQL for Smarties) is something like:
SELECT COUNT(*) AS counter, t0.object_id
FROM (SELECT object_id, COUNT(*) AS action_count FROM Actions GROUP BY object_id) AS t0,
     (SELECT object_id, COUNT(*) AS action_count FROM Actions GROUP BY object_id) AS t1
WHERE t0.action_count <= t1.action_count
GROUP BY t0.object_id
HAVING counter <= 15
This will give you the top 15 edited objects without sorting. Note that as of v5, MySQL only caches result sets for exactly duplicate (whitespace included) queries, so the nested query will not get cached. Using a view would resolve that problem.
Yes, it's three queries instead of two, and the only gain is not having to sort the grouped query, but if you have a lot of groups it will be faster.
Side note: this kind of query is really handy for median functions without sorts.
SELECT * FROM (SELECT object_id, count(object_id) AS action_count
               FROM `Actions`
               GROUP BY object_id
               ORDER BY action_count DESC) AS top_actions
LIMIT 10;