delete from tablename where id in ('val1','val2',....,'val6000');
'id' is a FK column [char(50)].
The IN clause contains 6000 values.
After 12 minutes 10 seconds, 1,270,000 rows are deleted from the table 'tablename'.
'tablename' contains 2 indexed columns apart from the PK and the FK ('id' is the FK column).
My question is: what takes more time, deleting the 1,270,000 rows, or parsing the WHERE clause to determine which rows to delete?
And is there a way to reduce this execution time?
Please check the configuration of your MySQL server for
innodb_buffer_pool_size
Change it to a bigger value in case it's small, like 2 GB, and try again.
Maybe your data does not fit in the current buffer pool.
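For reference, a quick sketch of how to check and raise it (the 2 GB figure is just an example; SET GLOBAL works for this variable on MySQL 5.7.5+, older versions need a my.cnf change and a restart):
-- Check the current buffer pool size (reported in bytes):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- Raise it to 2 GB at runtime (MySQL 5.7.5+):
SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024;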
Related
If I have a table TableA with 10k rows and I want to search all rows where id > 8000
When I use the SQL statement SELECT * FROM TableA WHERE id > 8000 to search them, what will MySQL do? Will it scan all 10k rows and return the 2k rows that match the condition, or just skip the 8k non-matching rows and return the 2k rows of data?
I also have a requirement to store a lot of data in the database per day and need a quick search for today's records. Is one big table still the best method, or are other solutions available?
Or would it be best to create 2 tables: one for all records and one for today's records. When new data comes in, both tables get the insert, but the next day the records in the second table are deleted.
Which method is better when comparing SELECT speed, or is there any other good method for this case?
Actually, I don't have the real database here now; I'm just worried about which way/method would be better in that case.
Updated information below (8-12-2016 11:00)
I am using InnoDB, but I will use the date as the search key, and it is not a PK.
Returning 2k rows is just an extreme case for study; the real case may return (number of users * records per user), so if I have 100 users and each makes 10 records that day, I may need to return 1k rows.
My real case is that I need to store all user records per day (maybe 10 records per user) and generate a ranking from the last day's records and the last 7 days' records. So I'm just worried: if I search for the last day's records in one large table, will it be slow, or should I create another table just to store the last day's records?
Are you fetching more than about 20% of the table? (The number 20% is inexact.)
Is the PRIMARY KEY on id? Or is it a secondary key?
Are you using ENGINE=InnoDB?
Case: InnoDB and PRIMARY KEY(id): The execution will start at 8000 and go until finished. This is optimal.
Case: InnoDB, id is a secondary key, and a 'small' percentage of table is being fetched: The index will be used; it is a BTree and is scanned from 8000 to end, jumping over to the data (via the PK) to find the rows.
Case: InnoDB, id is secondary, and large percentage: The index will be ignored, and the entire table will be scanned ("table scan"), ignoring rows that don't match the WHERE clause. A table scan is likely to be faster than the previous case because of all the 'jumping over to the data'.
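A quick way to check which of these cases you are in (a sketch; the key name in the output depends on your schema):
EXPLAIN SELECT * FROM TableA WHERE id > 8000;
-- type=range and key=<some index>  -> the index is scanned (first two cases)
-- type=ALL and key=NULL            -> a table scan was chosen (third case)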
Other comments:
10K rows is "small" as tables go.
Returning 2K rows is "large" as result sets go. What are you doing with them?
Is there further filtering that you could turn over to MySQL, so that you don't get all 2K back? Think COUNT or SUM with GROUP BY, a FULLTEXT search index, etc.; a sketch follows below.
More tips on indexes.
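For instance, a hedged sketch of pushing that filtering into MySQL (the category and amount columns are made up for illustration):
-- Let MySQL aggregate instead of returning all 2K rows:
SELECT category, COUNT(*) AS cnt, SUM(amount) AS total
FROM TableA
WHERE id > 8000
GROUP BY category;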
I've been running a website, accumulating a large amount of data in the process.
Users save data like ip, id, and date to the server, and it is stored in a MySQL database. Each entry is stored as a single row in a table.
Right now there are approximately 24 million rows in the table.
Problem 1:
Things are getting slow now; a full table scan can take many minutes even though I have already indexed the table.
Problem 2:
If a user is pulling SELECT data from the table, it can potentially block all other users' access to the site (as the table is locked) until the query is complete.
Our server:
32 GB RAM
12-core CPU with 24 threads
The table uses the MyISAM engine
EXPLAIN SELECT SUM(impresn), SUM(rae), SUM(reve), `date`
FROM `publisher_ads_hits`
WHERE date BETWEEN '2015-05-01' AND '2016-04-02' AND userid='168'
GROUP BY date
ORDER BY date DESC
Regarding the comment from @Max P.: if you write to MyISAM tables, ALL SELECTs are blocked; there is only a table lock. If you use InnoDB, there is a row lock that locks only the rows it needs. Also, show us the EXPLAIN of your queries; it is possible that you must create some new indexes. MySQL can only use one index per query, so if you use more fields in the WHERE condition, it can be useful to have a COMPOSITE INDEX over these fields.
According to EXPLAIN, the query doesn't use an index. Try adding a composite index on (userid, date).
If you have many update and delete operations, try changing the engine to InnoDB.
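A sketch of both changes against the table from the query above (the index name is an assumption):
-- One composite index covering the WHERE on userid plus the date range:
ALTER TABLE publisher_ads_hits ADD INDEX idx_userid_date (userid, `date`);
-- Row-level locking instead of MyISAM's table-level lock:
ALTER TABLE publisher_ads_hits ENGINE=InnoDB;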
The basic problem is the full table scan. Some suggestions are:
Partition the table based on date and don't keep more than 6-12 months of data in the live system (a sketch follows this list).
Add an index on user_id.
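As a sketch, date-based range partitioning could look like this (partition names and boundaries are assumptions; note that MySQL requires the partitioning column to be part of every unique key on the table):
ALTER TABLE publisher_ads_hits
PARTITION BY RANGE (TO_DAYS(`date`)) (
    PARTITION p2015 VALUES LESS THAN (TO_DAYS('2016-01-01')),
    PARTITION p2016 VALUES LESS THAN (TO_DAYS('2017-01-01')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);
-- Old data can then be removed cheaply by dropping a partition:
-- ALTER TABLE publisher_ads_hits DROP PARTITION p2015;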
Okay so here are my table schemas.
I have 2 tables, say Table A and Table B. The primary key of Table A is PriKeyA bigint(50) and the primary key of Table B is PriKeyB varchar(255). Both PriKeyA and PriKeyB contain the same type of data.
The relevant field of Table A required for this problem is Last_login_date_in_A (date); for Table B it is the primary key itself.
What I need to do is get those PriKeyA values in A which are not in Table B's PriKeyB column, where the Last_login_date_in_A column is more than 30 days before the current date. Basically, I need the difference of Table A and Table B along with a certain condition (which is the date in this problem).
Here is my SQL command:
SELECT A.PriKeyA FROM A
LEFT JOIN B ON A.PriKeyA = B.PriKeyB
WHERE B.PriKeyB IS NULL AND DATEDIFF(CURRENT_DATE, Last_login_date_in_A) > 30;
However, when I run this MySQL command, it takes a ridiculously long time (about 3 hours). The sizes of Table A and Table B are 2,50,000 and 42,000 records respectively. I thought this problem could arise due to the fact that PriKeyA and PriKeyB are different datatypes, so I also used CAST(PriKeyB AS unsigned) in the query. But that didn't work either; there was only a marginal performance improvement.
What could the possible problems be? I've used LEFT JOINs before and they have never taken this long.
The expense of the query appears to be for these reasons:
The SQL datatype for A's PK and B's PK aren't the same.
Table A probably doesn't have an index on Last_login_date_in_A
What this means is that ALL rows in table A MUST be examined one row at a time in order to determine whether the '> 30 days ago' criterion is true. This is especially true if A has 2,500,000 rows (as suggested by how you placed the commas in A's row count) instead of 250,000.
Adding an index on Last_login_date_in_A might help you out here, but it will also slightly slow down insert/update/delete statements on the table, due to the need to maintain the additional index.
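A sketch of both fixes, reusing the names from the question (the index name is an assumption; the date test is rewritten to compare the bare column against a constant, because wrapping the column in DATEDIFF prevents the new index from being used):
ALTER TABLE A ADD INDEX idx_last_login (Last_login_date_in_A);

SELECT A.PriKeyA FROM A
LEFT JOIN B ON A.PriKeyA = B.PriKeyB
WHERE B.PriKeyB IS NULL
  AND A.Last_login_date_in_A < CURRENT_DATE - INTERVAL 30 DAY;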
Additionally, you should use the documentation on EXPLAIN to inspect MySQL's actual chosen query plan for your query: MySQL query plan documentation
I have a table 'tbl' something like that:
ID bigint(20) - primary key, autoincrement
field1
field2
field3
That table has 600k+ rows.
Query 1: SELECT * FROM tbl ORDER BY ID LIMIT 600000, 1 -- takes 1.68 seconds
Query 2: SELECT ID, field1 FROM tbl ORDER BY ID LIMIT 600000, 1 -- takes 1.69 seconds
Query 3: SELECT ID FROM tbl ORDER BY ID LIMIT 600000, 1 -- takes 0.16 seconds
Query 4: SELECT * FROM tbl WHERE ID = xxx -- takes 0.005 seconds
These queries were tested in phpMyAdmin.
And the result: queries 3 and 4 together return the necessary data.
Query 1 does the same job, but much slower...
This doesn't look right to me.
Could anyone give any advice?
P.S. I'm sorry for the formatting; I'm new to this site.
New test:
Q5: CREATE TEMPORARY TABLE tmptable AS (SELECT ID FROM tbl WHERE ID LIMIT 600030, 30);
SELECT * FROM tbl WHERE ID IN (SELECT ID FROM tmptable); -- takes 0.38 sec
I still don't understand how this is possible. I recreated all the indexes... what else can I do with that table? Delete and refill it manually? :)
Query 1 looks at the table's primary key index, finds the correct 600,000 ids and their corresponding locations within the table, then goes to the table and fetches everything from those 600k locations.
Query 2 looks at the table's primary key index, finds the correct 600k ids and their corresponding locations within the table, then goes to the table and fetches whichever subset of fields are asked for from those 600k rows.
Query 3 looks at the table's primary key index, finds the correct 600k ids, and returns them. It doesn't need to look at the table at all.
Query 4 looks at the table's primary key index, finds the single entry requested, goes to the table, reads that single entry, and returns it.
Time-wise, let's build backwards:
(Q4) The table index allows lookup of a key (id) in O(log n) time, meaning every time the table doubles in size it only takes one extra step to find the key in the index*. If you have 1 million rows, then, it would only take ~20 steps to find it. A billion rows? 30 steps. The index entry includes data on where in the table to go to find the data for that row, so MySQL jumps to that spot in the table and reads the row. The time reported for this is almost entirely overhead.
(Q3) As I mentioned, the table index is very fast; this query finds the first entry and just traverses the tree until it has the requested number of rows. I'm sure I could calculate the precise number of steps it would take, but as a maximum we'll say 20 steps x 600k rows = 12M steps; since it's traversing a tree it would likely be more like 1M steps, but the precise number is largely irrelevant. The most important thing to realize here is that once MySQL has walked the index to pull the ids it needs, it has everything you asked for. There's no need to go look at the table. The time reported for this one is essentially the time it takes MySQL to walk the index.
(Q2) This begins with the same tree-walking as discussed for query 3, but while pulling the IDs it needs, MySQL also pulls their location within the table files. It then has to go to the table file (probably already cached/mmapped in memory), and for every entry it pulled, seek to the proper place in the table and get the fields requested out of those rows. The time reported for this query is the time it takes to walk the index (as in Q3) plus the time to visit every row specified in the index.
(Q1) This is identical to Q2 when all fields are specified. As the time is essentially identical to Q2, we can see that it doesn't really take measurably more time to pull more fields out of the database; any extra time is dwarfed by crawling the index and seeking to the rows.
*: Most databases use an indexing data structure (B-trees for MySQL) that has a log base much higher than 2, meaning that instead of an extra step every time the table doubles, it's more like an extra step every time the table size goes up by a factor of hundreds to thousands. This means that instead of the 20-30 steps I stated in the example, it's more like 2-5.
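This analysis also suggests a common workaround, sometimes called a deferred join: walk the cheap index first (as in Q3), then visit the table only for the row you actually want (as in Q4). A sketch against the tbl table above:
-- Find the target ID using only the PK index, then fetch just that row:
SELECT t.*
FROM tbl AS t
JOIN (SELECT ID FROM tbl ORDER BY ID LIMIT 600000, 1) AS picked
    ON t.ID = picked.ID;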
If I SELECT IDs and then UPDATE using those IDs, the UPDATE query is faster than if I UPDATE using the conditions from the SELECT.
To illustrate:
SELECT id FROM table WHERE a IS NULL LIMIT 10; -- 0.00 sec
UPDATE table SET field = value WHERE id IN (...); -- 0.01 sec
The above is about 100 times faster than an UPDATE with the same conditions:
UPDATE table SET field = value WHERE a IS NULL LIMIT 10; -- 0.91 sec
Why?
Note: the a column is indexed.
Most likely the second UPDATE statement locks many more rows, while the first one uses the unique key and locks only the rows it's going to update.
The two queries are not identical; you only assume that the IDs are unique in the table.
UPDATE ... LIMIT 10 will update at most 10 records.
UPDATE ... WHERE id IN (SELECT ... LIMIT 10) may update more than 10 records if there are duplicate ids.
I don't think there can be one straightforward answer to your "why?" without doing some analysis and research.
SELECT queries are normally cached, which means that if you run the same SELECT multiple times, the execution time of the first run is normally greater than that of the following runs. Note that this behavior is only noticeable when the SELECT is heavy, not in scenarios where even the first SELECT is fast. So in your example, the SELECT may have taken 0.00s because of caching. The UPDATE queries use different WHERE clauses, and hence it is likely that their execution times differ.
Though the column a is indexed, it is not necessarily the case that MySQL uses the index when doing the SELECT or the UPDATE. Study the EXPLAIN outputs. Also, see the output of SHOW INDEX and check whether the "Comment" column reads "disabled" for any index. You may read more here: http://dev.mysql.com/doc/refman/5.0/en/show-index.html and http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html.
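For example (EXPLAIN on UPDATE statements needs MySQL 5.6.3+; on older versions, EXPLAIN the equivalent SELECT instead):
EXPLAIN SELECT id FROM `table` WHERE a IS NULL LIMIT 10;
EXPLAIN UPDATE `table` SET field = value WHERE a IS NULL LIMIT 10;
SHOW INDEX FROM `table`;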
Also, if we ignore the SELECT for a while and focus only on the UPDATE queries, it is obvious that they aren't using the same WHERE condition: the first one runs on the id column and the latter on a. Though both columns are indexed, that does not necessarily mean that all the table's indexes perform alike. One index may be more efficient than another depending on the size of the index, the datatype of the indexed column, or whether it is a single- or multiple-column index. There may well be other reasons, but I'm not an expert on this.
Also, I think the second UPDATE is doing more work in the sense that it might be taking more row-level locks than the first UPDATE. It is true that both UPDATEs ultimately change the same number of rows. But while the first update locks just the 10 rows, I think the second UPDATE locks all rows where a is NULL (which is more than 10) before doing the UPDATE. Perhaps MySQL first applies the locking and then evaluates the LIMIT clause to update only a limited number of records.
Hope the above explanation makes sense!
Do you have a composite index or separate indexes?
If it is a composite index of the id and a columns,
in the 2nd UPDATE statement the a column's index would not be used. The reason is that only the leftmost prefix of an index is used (unless a is the PRIMARY KEY).
So if you want the a column's index to be used, you need to include id in your WHERE clause as well, with id first and then a.
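As a sketch, such a composite index could be created like this (the index name is an assumption):
ALTER TABLE `table` ADD INDEX idx_id_a (id, a);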
Also, it depends on which storage engine you are using, since MySQL implements indexes at the engine level, not the server level.
You can try this:
UPDATE table SET field = value WHERE id IN (...) AND a IS NULL LIMIT 10;
By doing this, id is in the leftmost position of the index, followed by a.
Also, from your comments: the lookups are much faster because, if you are using InnoDB, updating indexed columns means the InnoDB storage engine may have to move index entries to a different page node, or split a page if it is already full, since InnoDB stores indexes in sequential order. This process is VERY slow and expensive, and it gets even slower if your indexes are fragmented or your table is very big.
The comment by Michael J.V is the best description. This answer assumes a is a column that is not indexed and 'id' is.
The WHERE clause in the first UPDATE command works off the primary key of the table, id.
The WHERE clause in the second UPDATE command works off a non-indexed column. This makes finding the rows to be updated significantly slower.
Never underestimate the power of indexes. A table whose indexes are used correctly will perform better than a table a tenth its size with no indexing.
Regarding "MySQL doesn't support updating the same table you're selecting from", i.e. this fails:
UPDATE table SET field = value
WHERE id IN (SELECT id FROM table WHERE a IS NULL LIMIT 10);
Just do this:
UPDATE table SET field = value
WHERE id IN (select id from (SELECT id FROM table WHERE a IS NULL LIMIT 10));
The accepted answer seems right but is incomplete; there are major differences.
As far as I understand, and I'm not an SQL expert:
In the first query you SELECT N rows and UPDATE them using the primary key.
That's very fast, as you have direct access to all rows based on the fastest possible index.
In the second query you UPDATE N rows using LIMIT.
That will lock all rows and release them again after the update is finished.
The big difference is that you have a RACE CONDITION in case 1) and an atomic UPDATE in case 2).
If you have two or more simultaneous calls of the case 1) query, you'll have the situation that both select the SAME ids from the table.
Both calls will then update the same IDs simultaneously, overwriting each other.
This is called a "race condition".
The second case avoids that issue: MySQL will lock all rows during the update.
If a second session runs the same command, it will wait until the rows are unlocked.
So no race condition is possible, at the expense of lost time.
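If you want to keep the two-step pattern of case 1) without the race, one common approach (a sketch, assuming InnoDB and a transaction) is to lock the selected rows up front with SELECT ... FOR UPDATE:
START TRANSACTION;
-- Locks the matched rows so a concurrent session cannot pick the same IDs:
SELECT id FROM `table` WHERE a IS NULL LIMIT 10 FOR UPDATE;
-- ... then update using the IDs returned above:
UPDATE `table` SET field = value WHERE id IN (...);
COMMIT;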