Very different execution times for Update on different MySQL systems - mysql

I have a very strange situation with MySQL.
Take the following example:
A table containing 15000 entries
CREATE TABLE `temp_geonis_export` (
`auto_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
`obj_id` VARCHAR(10),
`gis_id` VARCHAR(45),
`letzteReinigung_id` VARCHAR(10),
INDEX `Index_2`(`gis_id`),
PRIMARY KEY (`auto_id`)
)
ENGINE = InnoDB;
Now I'm updating 14000 of the rows in the table. I know it's a pretty ugly statement that could easily be rewritten, but that is not the question...
Update temp_geonis_export as temp
inner join (
Select gis_id, obj_id from
(
Select abw.gis_id, abw.bezeichnung, erh.obj_id
from od_abwasserbauwerk as abw
inner join od_erhaltungsereignis as erh on erh.fs_abwasserbauwerk = abw.obj_id and erh.status = 2
inner join od_reinigung as unter on unter.obj_id = erh.obj_id
order by fs_abwasserbauwerk asc, erh.zeitpunkt asc
) as alleSortiert group by alleSortiert.gis_id
) as naechsteRein on temp.gis_id = naechsteRein.gis_id
set temp.naechsteReinigung_id = naechsteRein.obj_id;
Now, if I run the update statement on our development server, it takes about 1 second. On one of our production servers it takes 90 seconds!
These are my observations:
Handler_read_rnd_next 101000 (development), 266177000 (production)
Very high CPU Usage on production system (due to the above observation)
Almost no Disk IO on both Systems
When I rewrite the Update-Query and store the output of the subquery into a temporary table, the Update-Statement is fast on both systems
Based on these observations, my conclusion is that, for some reason, our production server has to perform a full table scan for each updated row, while the development server does not. It must be a configuration issue, since our servers all run 5.1.25 and the hardware is comparable.
Do you have a clue, what I have to change on our production server to make it perform better?
Thanks for your help

After hours I finally got the solution:
The problem was that the DEFAULT CHARSET on the production server was different from the charset used in the database. When the table was created without explicitly specifying a charset, MySQL did not use the indexes. This is because MySQL can't use indexes on character columns when the charsets of the joined tables differ.
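For reference, a minimal sketch of the fix, assuming the other od_* tables use utf8 (substitute whatever charset your database actually uses):
CREATE TABLE `temp_geonis_export` (
  `auto_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  `obj_id` VARCHAR(10),
  `gis_id` VARCHAR(45),
  `letzteReinigung_id` VARCHAR(10),
  INDEX `Index_2` (`gis_id`),
  PRIMARY KEY (`auto_id`)
)
ENGINE = InnoDB
DEFAULT CHARSET = utf8;  -- match the charset of the tables being joined so the index on gis_id can be used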
Thank you so much for your help peterm. You pointed me in the right direction.

Related

Fastest way to remove a HUGE set of row keys from a table via primary key? [duplicate]

I have two tables. Let's call them KEY and VALUE.
KEY is small, somewhere around 1,000,000 records.
VALUE is huge, say 1,000,000,000 records.
Between them there is a connection such that each KEY might have many VALUES. It's not a foreign key but basically the same meaning.
The DDL looks like this
create table KEY (
key_id int,
primary key (key_id)
);
create table VALUE (
key_id int,
value_id int,
primary key (key_id, value_id)
);
Now, my problem. About half of all key_ids in VALUE have been deleted from KEY and I need to delete them in an orderly fashion while both tables are still under high load.
It would be easy to do
delete v
from VALUE v
left join KEY k using (key_id)
where k.key_id is null;
However, since LIMIT is not allowed on a multi-table DELETE, I don't like this approach. Such a delete would take hours to run, which makes it impossible to throttle the deletes.
Another approach is to create a cursor to find all missing key_ids and delete them one by one with a limit. That seems very slow and kind of backwards.
Are there any other options? Some nice tricks that could help?
Any solution that tries to delete so much data in one transaction is going to overwhelm the rollback segment and cause a lot of performance problems.
A good tool to help is pt-archiver. It performs incremental operations on moderate-sized batches of rows, as efficiently as possible. pt-archiver can copy, move, or delete rows depending on options.
The documentation includes an example of deleting orphaned rows, which is exactly your scenario:
pt-archiver --source h=host,D=db,t=VALUE --purge \
--where 'NOT EXISTS(SELECT * FROM `KEY` WHERE key_id=`VALUE`.key_id)' \
--limit 1000 --commit-each
Executing this will take significantly longer to delete the data, but it won't use too many resources and won't interrupt service on your existing database. I have used it successfully to purge hundreds of millions of rows of outdated data.
pt-archiver is part of the Percona Toolkit for MySQL, a free (GPL) set of scripts that help with common tasks with MySQL and compatible databases.
Directly from MySQL documentation
If you are deleting many rows from a large table, you may exceed the
lock table size for an InnoDB table. To avoid this problem, or simply
to minimize the time that the table remains locked, the following
strategy (which does not use DELETE at all) might be helpful:
Select the rows not to be deleted into an empty table that has the same structure as the original table:
INSERT INTO t_copy SELECT * FROM t WHERE ... ;
Use RENAME TABLE to atomically move the original table out of the way and rename the copy to the original name:
RENAME TABLE t TO t_old, t_copy TO t;
Drop the original table:
DROP TABLE t_old;
No other sessions can access the tables involved while RENAME TABLE
executes, so the rename operation is not subject to concurrency
problems. See Section 12.1.9, “RENAME TABLE Syntax”.
So in your case you may do:
INSERT INTO value_copy SELECT * FROM VALUE WHERE key_id IN
(SELECT key_id FROM `KEY`);
RENAME TABLE value TO value_old, value_copy TO value;
DROP TABLE value_old;
And according to what they wrote here, the RENAME operation is quick and the number of records doesn't affect it.
What about this for having a limit?
delete x
from `VALUE` x
join (select key_id, value_id
from `VALUE` v
left join `KEY` k using (key_id)
where k.key_id is null
limit 1000) y
on x.key_id = y.key_id AND x.value_id = y.value_id;
First, examine your data. Find the keys which have too many values to be deleted "fast". Then find out which times during the day you have the smallest load on the system. Perform the deletion of the "bad" keys during that time. For the rest, start deleting them one by one with some downtime between deletes so that you don't put too much pressure on the database while you do it.
Maybe instead of a LIMIT, divide the whole set of rows into small parts by key_id:
delete v
from VALUE v
left join KEY k using (key_id)
where k.key_id is null and v.key_id > 0 and v.key_id < 100000;
then delete rows with key_id in 100000..200000 and so on.
You can try to delete in separate transaction batches.
This example is for MSSQL, but the idea is similar for MySQL.
declare @i INT
declare @step INT
set @i = 0
set @step = 100000
while (@i < (select max(VALUE.key_id) from VALUE))
BEGIN
    BEGIN TRANSACTION
    delete from VALUE where
        VALUE.key_id between @i and @i + @step and
        not exists(select 1 from KEY where KEY.key_id = VALUE.key_id and KEY.key_id between @i and @i + @step)
    set @i = (@i + @step)
    COMMIT TRANSACTION
END
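A rough MySQL translation of the same idea, sketched as a stored procedure; the procedure name, step size and INT sizing are assumptions to adjust:
DELIMITER $$
CREATE PROCEDURE purge_orphan_values()
BEGIN
    DECLARE i INT DEFAULT 0;
    DECLARE step INT DEFAULT 100000;
    DECLARE max_id INT;
    SELECT MAX(key_id) INTO max_id FROM `VALUE`;
    WHILE i < max_id DO
        START TRANSACTION;
        -- delete only the orphans that fall inside the current key_id window
        DELETE FROM `VALUE`
        WHERE key_id BETWEEN i AND i + step
          AND NOT EXISTS (SELECT 1 FROM `KEY` k WHERE k.key_id = `VALUE`.key_id);
        COMMIT;
        SET i = i + step;
    END WHILE;
END$$
DELIMITER ;
Run it with CALL purge_orphan_values(); and drop the procedure once the cleanup is done.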
Create a temporary table!
drop table if exists batch_to_delete;
create temporary table batch_to_delete as
select v.* from `VALUE` v
left join `KEY` k on k.key_id = v.key_id
where k.key_id is null
limit 10000; -- tailor batch size to your taste
-- optional but may help for large batch size
create index batch_to_delete_ix_key on batch_to_delete(key_id);
create index batch_to_delete_ix_value on batch_to_delete(value_id);
-- do the actual delete
delete v from `VALUE` v
join batch_to_delete d on d.key_id = v.key_id and d.value_id = v.value_id;
To me, this is the kind of task whose progress I would want to see in a log file. I would also avoid solving this in pure SQL and would use some scripting in Python or a similar language. Another thing that would bother me is that lots of LEFT JOIN / IS NULL checks between the tables might cause unwanted locks, so I would avoid JOINs as well.
Here is some pseudo code:
max_key = select_db('SELECT MAX(key) FROM VALUE')
while max_key > 0:
    cur_range = range(max_key, max_key - 100, -1)
    good_keys = select_db('SELECT key FROM KEY WHERE key IN (%s)' % cur_range)
    keys_to_del = set(cur_range) - set(good_keys)
    while 1:
        deleted_count = update_db('DELETE FROM VALUE WHERE key IN (%s) LIMIT 1000' % keys_to_del)
        db_commit()
        log_something()
        if not deleted_count:
            break
    max_key -= 100
This should not bother the rest of the system very much, but it may take a long time. Another issue is optimizing the table after you have deleted all those rows, but that is another story.
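For that last point, the cleanup afterwards could be as small as the statement below (note that for InnoDB this rebuilds the table, so run it during a quiet period):
-- reclaim space and defragment after the mass delete
OPTIMIZE TABLE `VALUE`;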
If the target columns are properly indexed, this should go fast:
DELETE FROM `VALUE`
WHERE NOT EXISTS(SELECT 1 FROM `key` k WHERE k.key_id = `VALUE`.key_id)
-- ORDER BY key_id, value_id -- order by PK is good idea, but check the performance first.
LIMIT 1000
Adjust the LIMIT anywhere from 10 to 10000 to get acceptable performance, and rerun it repeatedly until all matching rows are gone.
Also keep in mind that such mass deletes perform locking and undo logging for each row, which multiplies the execution time per row several times. There are some advanced methods to prevent this, but the easiest workaround is just to put a transaction around this query.
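For example, a sketch of the query above wrapped in an explicit transaction:
START TRANSACTION;
DELETE FROM `VALUE`
WHERE NOT EXISTS(SELECT 1 FROM `key` k WHERE k.key_id = `VALUE`.key_id)
LIMIT 1000;
COMMIT;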
Do you have a slave or a Dev/Test environment with the same data?
The first step is to find out your data distribution, in case you are worried about a particular key having 1 million value_ids:
SELECT v.key_id, COUNT(IFNULL(k.key_id,1)) AS cnt
FROM `value` v LEFT JOIN `key` k USING (key_id)
WHERE k.key_id IS NULL
GROUP BY v.key_id ;
The EXPLAIN plan for the above query is much better than it is when you add:
ORDER BY COUNT(IFNULL(k.key_id,1)) DESC;
Since you don't have partitioning on key_id (there would be too many partitions in your case) and want to keep the database running during your delete process, the option is to delete in chunks with SLEEP() between the deletes for different key_id ranges, to avoid overwhelming the server. Don't forget to keep an eye on your binary logs to avoid filling the disk.
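A sketch of what two consecutive chunks might look like (the key_id ranges and the sleep time are placeholders):
-- delete the orphans in one key_id window, then pause before the next window
DELETE v FROM `VALUE` v
LEFT JOIN `KEY` k USING (key_id)
WHERE k.key_id IS NULL
  AND v.key_id BETWEEN 0 AND 99999;

DO SLEEP(5);  -- give other traffic room to breathe

DELETE v FROM `VALUE` v
LEFT JOIN `KEY` k USING (key_id)
WHERE k.key_id IS NULL
  AND v.key_id BETWEEN 100000 AND 199999;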
The quickest way is:
Stop the application so the data is not changed.
Dump key_id and value_id from the VALUE table, keeping only the rows whose key_id matches the KEY table, by using
mysqldump YOUR_DATABASE_NAME value --where="key_id in (select key_id from YOUR_DATABASE_NAME.key)" --lock-all-tables --opt --quick --quote-names --skip-extended-insert > VALUE_DATA.txt
Truncate the VALUE table
Load the data exported in step 2 (see the sketch after these steps)
Start the application
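Steps 3 and 4 might look like this on the command line (a sketch; adjust the database name and credentials, and note that a dump made with --opt already recreates the table, so the explicit TRUNCATE is just belt and braces):
mysql YOUR_DATABASE_NAME -e 'TRUNCATE TABLE `VALUE`;'   # step 3
mysql YOUR_DATABASE_NAME < VALUE_DATA.txt               # step 4: replay the dumped INSERTs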
As always, try this in Dev/Test environment with Prod data and same infrastructure so you can calculate downtime.
Hope this helps.
I am just curious what the effect would be of adding a non-unique index on key_id in table VALUE. Selectivity is not high at all (~0.001), but I wonder how it would affect the join performance.
Why don't you split your VALUE table into several tables according to some rule like key_id modulo some power of 2 (256, for example)?
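If physically separate tables feel too invasive, native hash partitioning gives a comparable modulo-style spread; this is only an illustration of the idea, not something the answer above prescribes:
-- spread VALUE over 256 hash partitions on key_id
-- (works here because the primary key (key_id, value_id) contains the partitioning column)
ALTER TABLE `VALUE`
    PARTITION BY HASH (key_id)
    PARTITIONS 256;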

JOIN Three tables

I have this database "stats19" with all the data from UK accidents from 2005 to 2013.
Now I have to create a data warehouse (star schema).
These are the tables I'm trying to join into one table, omitting some other variables that are not important:
stats19.casualty (2,020,000 rows)
AccidentIndex varchar(13)
VehicleReference int(11)
CasualtyReference_id int(11)
CasualtyClass int(11)
CasualtySeverity varchar(7)
CasualtySex varchar(28)
CasualtyAgeBand varchar(7)
...
stats19.typeperson (2,020,000 rows)
CasualtyType_id int(11)
fk_AccidentIndex varchar(13)
fk_VehicleReference int(11)
fk_CasualtyReference_id int(11)
...
stats19.accident (1,494,275 rows)
AccidentIndex varchar(13)
AccidentDate date
AccidentTime time
...
Final table must have this variables
dw.casualtytemporary (should have 2,020,000 rows)
AccidentIndex VARCHAR(255),
VehicleReference INT,
CasualtyReference INT,
CasualtyClass INT,
CasualtyType INT,
AccidentDate DATE,
AccidentTime TIME,
CasualtySex VARCHAR(255),
CasualtyAgeBand VARCHAR(255)
I have been trying to execute this insert:
INSERT INTO CasualtyTemp
SELECT c.AccidentIndex, c.VehicleReference, c.CasualtyReference_id,
       c.CasualtyClass, t.CasualtyType_id, a.AccidentDate, a.AccidentTime,
       c.CasualtySex, c.CasualtyAgeBand
FROM stats19.Casualty AS c
INNER JOIN stats19.typeperson AS t
        ON c.CasualtyReference_id = t.fk_CasualtyReference_id
INNER JOIN stats19.accident AS a
        ON a.AccidentIndex = c.AccidentIndex;
The problem is that both the MySQL command line and Workbench fail to insert, either with an error (disconnection) or by taking too much time to do the insert.
Final table dw.casualtytemporary should have 2,020,000 rows as this is what the original table had.
Since you are doing a full table join without a WHERE clause, the search complexity is roughly n1*log(n2)*log(n3), where ni is the row count of each table (assuming you use an index on the inner join fields).
I think your SQL statement is correct, and the MySQL optimizer will further optimize it, so there is not much to do on the SQL side. But you can tune the MySQL side; I list some things that may be important.
Both storage engines should be the same; this ensures the tables are joined at the engine level, otherwise they are joined at the server level, which is slow.
If you use InnoDB, you can tune the important InnoDB parameters such as innodb_buffer_pool_size, because enough space lets InnoDB build its adaptive hash index in memory.
If you use the MyISAM engine, you can tune the MyISAM key buffer so the indexes can be loaded into memory.
Also, since the query produces a derived table, tmp_table_size is important; if tmp_table_size is too small, an on-disk MyISAM table will be used as the temporary table. Note also that InnoDB is slow to write due to its doublewrite mechanism, and this gets worse with INSERT ... SELECT, since concurrent insert is disabled.
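A sketch of where those knobs live in my.cnf; the values are illustrative placeholders, not recommendations:
[mysqld]
innodb_buffer_pool_size = 4G      # let InnoDB cache the joined tables and their indexes
key_buffer_size         = 512M    # MyISAM index cache, if you use MyISAM
tmp_table_size          = 256M    # allow larger in-memory temporary tables
max_heap_table_size     = 256M    # must be raised together with tmp_table_size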
Other factors matter as well, such as whether there are NULLs in your fields and whether a field's values repeat heavily; if so, you can use ENUM, which is faster than VARCHAR. Also note that CHAR is roughly 20% faster than VARCHAR, so if disk space is not a concern and the strings are short, it may be worth trying.
If none of the above resolves your problem, or you don't have a big machine, then since you are only joining three tables you could write some code in C/C++, which is the most efficient way.

MySQL Left Join Query hung for 4 hours

Is it acceptable, or normal, to wait 3 hours, 50 minutes and 39.70 seconds for the query below to run?
CREATE TABLE ukbm001marketing.CampaignHistory_v2
(
AddressId int,
CampaignId int,
CampaignTypeId int,
OpenUserId nvarchar(255),
OpenDate datetime,
CloseUserId nvarchar(255),
CloseDate datetime
);
INSERT INTO ukbm001marketing.CampaignHistory_v2
SELECT a.ContactId,
b.CampaignCodeId,
c.CampaignId,
'mballinger',
now(),
NULL,
NULL -- SELECT *
FROM
ukbm001marketing.temp_ContactHistory_grtthn2009_raw a
LEFT JOIN
ukbm001marketing.temp_CampaignCode_raw b ON a.CampaignCode = b.CampaignCode
AND a.ContactDate = b.ContactDate
AND a.Load_Date = b.Load_Date
LEFT JOIN
ukbm001marketing.temp_ContactCodes_raw c ON a.ContactCode = c.ContactCode;
Tables used in the query:
ukbm001marketing.temp_ContactHistory_grtthn2009_raw has 565,832 rows
ukbm001marketing.temp_CampaignCode_raw has 9505 rows
ukbm001marketing.temp_ContactCodes_raw has 39 rows
I ran this query in the command prompt line.
In the past I have worked with Microsoft SQL Server (set up by my IS Department). I am working on a project of my own and I have the following set up:
Installed Using EasyPHP12.1
Apache/2.4.2 (Win32) PHP/5.4.6
Software version: 5.5.27-log - MySQL Community Server (GPL)
Laptop Spec
Processor: Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
RAM: 8.00 GB (7.88 GB usable)
System 64bit Windows System
The system was running at 50% CPU usage.
I have not indexed any of the tables. I have not given the tables any primary keys. Does this issue relate to my system performance? Is it a database design issue? Or is it a setting on the MySQL server?
Many thanks in advance for your help.
This is definitely an index issue. You are joining a table of 565,832 rows on 3 fields to a 9,505-row table without using any indexes. This results in a full table scan on both tables, meaning that the server really does have to fetch all of those 565,832 rows from disk (slow) and match them in memory.
Given the limited info you gave, I don't think you have done any MySQL optimization, meaning that your join_buffer_size will be rather small. This will result in even more table scans, as MySQL will not be able to store everything in the buffer. So your 8 GB of RAM will not do you any good if you do not instruct MySQL to actually use it.
So basically, create a multi-column index on the campaigncode, contactdate and load_date fields on both the 'a' and 'b' tables, and add an index on the contactcode field of the 'a' and 'c' tables.
Depending on the table structure (especially when the b or c table contains a lot of other fields that are not listed here), you might even consider adding the campaigncodeid field to the index on the 'b' table and the campaignid to the index on the 'c' table. This way, MySQL will be able to use the index to retrieve all the data and will not need to access the actual data table to retrieve the two fields. Obviously, the penalty is that your index will be larger. Combined with some MySQL tuning, you could keep the indexes in memory, speeding the whole thing up even more.
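A sketch of those indexes; the column names are taken from the query above (verify them against the real table definitions), and the trailing columns in the 'b' and 'c' indexes are the optional covering columns just mentioned:
ALTER TABLE ukbm001marketing.temp_ContactHistory_grtthn2009_raw
    ADD INDEX idx_campaign (CampaignCode, ContactDate, Load_Date),
    ADD INDEX idx_contactcode (ContactCode);

ALTER TABLE ukbm001marketing.temp_CampaignCode_raw
    ADD INDEX idx_campaign (CampaignCode, ContactDate, Load_Date, CampaignCodeId);

ALTER TABLE ukbm001marketing.temp_ContactCodes_raw
    ADD INDEX idx_contactcode (ContactCode, CampaignId);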

multiple temporary tables?

This might be a basic question: I am using a temporary table in some of my PHP code like so:
CREATE TEMPORARY TABLE ttable( `d` DATE NOT NULL , `p` DECIMAL( 11, 2 ) NOT NULL , UNIQUE KEY `date` ( `d` ) );
INSERT INTO ttable( d, p ) VALUES ( '$d' , '$p' );
SELECT * FROM ttable;
As we scale up our site, will this ever be a problem? I.e., will user1's ttable and user2's ttable ever get mixed up, with user1 seeing user2's ttable and vice versa? Is it better to create a unique name for each temporary table?
thx
Temporary tables are session-specific. Every time you connect to a host (in PHP, this is done with mysql_connect), temporary tables that you create exist only within that session/connection.
It is almost always better to find a different way than using temporary tables.
The only time I would consider them is under the following conditions:
The activity is rare. Meaning, a given user MIGHT do this once a week.
It is used as a holding container prior to doing a regular full import of data.
It deals with data whose structure is unknown prior to being filled.
All three of those really go with building some type of generic bulk import routines where the data mapping is defined at run time.
If you find yourself creating temp tables frequently in the application, there's probably a better way.
Scalability is going to depend on the amount of data being loaded and the frequency of temp table usage. For a low-traffic site it might be okay.
We're in the process of ripping out a ton of temp table usage by a client's app. 90% of the queries in their system result in a temp table being created. Analysis of all the queries has shown that the original dev used this mechanism simply because they didn't understand SQL. We're doing this because performance has radically dropped off as new users are added to the system.
Can you post a use case? Maybe we can help provide an alternate mechanism.
UPDATE:
Now that we have a use case, here is a simple table structure to accomplish what you need.
Table ZipCodes
ZipCode char(5) [or char(10) depending on need]
CityName varchar(50)
*other columns as necessary such as latitude or whatever.
Table TempReadings
ZipCode char(5) [foreign key to the ZipCode table]
ReadingDate datetime
Temperature float (or some equivalent)
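In SQL, the two tables might be declared roughly like this (types, sizes and the choice of primary key are assumptions to adapt):
CREATE TABLE ZipCodes (
    ZipCode  CHAR(5)     NOT NULL,   -- or CHAR(10) depending on need
    CityName VARCHAR(50) NOT NULL,
    -- other columns as necessary, such as latitude
    PRIMARY KEY (ZipCode)
) ENGINE=InnoDB;

CREATE TABLE TempReadings (
    ZipCode     CHAR(5)  NOT NULL,
    ReadingDate DATETIME NOT NULL,
    Temperature FLOAT,
    PRIMARY KEY (ZipCode, ReadingDate),
    CONSTRAINT fk_tempreadings_zipcode
        FOREIGN KEY (ZipCode) REFERENCES ZipCodes (ZipCode)
) ENGINE=InnoDB;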
To get all the temp readings for a given zip code you would do something like:
select ZipCode, ReadingDate, Temperature
from TempReadings
if you need info from the main ZipCode table:
select Z.ZipCode, Z.CityName, TR.ReadingDate, TR.Temperature
from ZipCodes Z
inner join TempReadings TR on (TR.ZipCode = Z.ZipCode)
Add WHERE clauses as necessary. Note that none of the above requires having a separate table per zip code.

Faster way to delete matching rows?

I'm a relative novice when it comes to databases. We are using MySQL and I'm currently trying to speed up a SQL statement that seems to take a while to run. I looked around on SO for a similar question but didn't find one.
The goal is to remove all the rows in table A that have a matching id in table B.
I'm currently doing the following:
DELETE FROM a WHERE EXISTS (SELECT b.id FROM b WHERE b.id = a.id);
There are approximately 100K rows in table a and about 22K rows in table b. The column 'id' is the PK for both tables.
This statement takes about 3 minutes to run on my test box - Pentium D, XP SP3, 2GB ram, MySQL 5.0.67. This seems slow to me. Maybe it isn't, but I was hoping to speed things up. Is there a better/faster way to accomplish this?
EDIT:
Some additional information that might be helpful. Tables A and B have the same structure as I've done the following to create table B:
CREATE TABLE b LIKE a;
Table a (and thus table b) has a few indexes to help speed up queries that are made against it. Again, I'm a relative novice at DB work and still learning. I don't know how much of an effect, if any, this has on things. I assume that it does have an effect as the indexes have to be cleaned up too, right? I was also wondering if there were any other DB settings that might affect the speed.
Also, I'm using InnoDB.
Here is some additional info that might be helpful to you.
Table A has a structure similar to this (I've sanitized this a bit):
DROP TABLE IF EXISTS `frobozz`.`a`;
CREATE TABLE `frobozz`.`a` (
`id` bigint(20) unsigned NOT NULL auto_increment,
`fk_g` varchar(30) NOT NULL,
`h` int(10) unsigned default NULL,
`i` longtext,
`j` bigint(20) NOT NULL,
`k` bigint(20) default NULL,
`l` varchar(45) NOT NULL,
`m` int(10) unsigned default NULL,
`n` varchar(20) default NULL,
`o` bigint(20) NOT NULL,
`p` tinyint(1) NOT NULL,
PRIMARY KEY USING BTREE (`id`),
KEY `idx_l` (`l`),
KEY `idx_h` USING BTREE (`h`),
KEY `idx_m` USING BTREE (`m`),
KEY `idx_fk_g` USING BTREE (`fk_g`),
KEY `fk_g_frobozz` (`id`,`fk_g`),
CONSTRAINT `fk_g_frobozz` FOREIGN KEY (`fk_g`) REFERENCES `frotz` (`g`)
) ENGINE=InnoDB AUTO_INCREMENT=179369 DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;
I suspect that part of the issue is there are a number of indexes for this table.
Table B looks similar to table A, though it only contains the columns id and h.
Also, the profiling results are as follows:
starting 0.000018
checking query cache for query 0.000044
checking permissions 0.000005
Opening tables 0.000009
init 0.000019
optimizing 0.000004
executing 0.000043
end 0.000005
end 0.000002
query end 0.000003
freeing items 0.000007
logging slow query 0.000002
cleaning up 0.000002
SOLVED
Thanks to all the responses and comments. They certainly got me to think about the problem. Kudos to dotjoe for getting me to step away from the problem by asking the simple question "Do any other tables reference a.id?"
The problem was that there was a DELETE TRIGGER on table A which called a stored procedure to update two other tables, C and D. Table C had a FK back to a.id and after doing some stuff related to that id in the stored procedure, it had the statement,
DELETE FROM c WHERE c.id = theId;
I looked into the EXPLAIN statement and rewrote this as,
EXPLAIN SELECT * FROM c WHERE c.other_id = 12345;
So, I could see what this was doing and it gave me the following info:
id 1
select_type SIMPLE
table c
type ALL
possible_keys NULL
key NULL
key_len NULL
ref NULL
rows 2633
Extra using where
This told me that it was a painful operation to make and since it was going to get called 22500 times (for the given set of data being deleted), that was the problem. Once I created an INDEX on that other_id column and reran the EXPLAIN, I got:
id 1
select_type SIMPLE
table c
type ref
possible_keys Index_1
key Index_1
key_len 8
ref const
rows 1
Extra
Much better, in fact really great.
I added that Index_1 and my delete times are in line with the times reported by mattkemp. This was a really subtle error on my part due to shoe-horning some additional functionality at the last minute. It turned out that most of the suggested alternative DELETE/SELECT statements, as Daniel stated, ended up taking essentially the same amount of time and as soulmerge mentioned, the statement was pretty much the best I was going to be able to construct based on what I needed to do. Once I provided an index for this other table C, my DELETEs were fast.
Postmortem:
Two lessons learned came out of this exercise. First, it is clear that I didn't leverage the power of the EXPLAIN statement to get a better idea of the impact of my SQL queries. That's a rookie mistake, so I'm not going to beat myself up about that one. I'll learn from that mistake. Second, the offending code was the result of a 'get it done quick' mentality and inadequate design/testing led to this problem not showing up sooner. Had I generated several sizable test data sets to use as test input for this new functionality, I'd have not wasted my time nor yours. My testing on the DB side was lacking the depth that my application side has in place. Now I've got the opportunity to improve that.
Reference: EXPLAIN Statement
Deleting data from InnoDB is the most expensive operation you can request of it. As you already discovered the query itself is not the problem - most of them will be optimized to the same execution plan anyway.
While it may be hard to understand why DELETEs, of all operations, are the slowest, there is a rather simple explanation. InnoDB is a transactional storage engine. That means that if your query was aborted halfway through, all records would still be in place as if nothing happened. Once it is complete, all will be gone in the same instant. During the DELETE, other clients connecting to the server will see the records until your DELETE completes.
To achieve this, InnoDB uses a technique called MVCC (Multi Version Concurrency Control). What it basically does is to give each connection a snapshot view of the whole database as it was when the first statement of the transaction started. To achieve this, every record in InnoDB internally can have multiple values - one for each snapshot. This is also why COUNTing on InnoDB takes some time - it depends on the snapshot state you see at that time.
For your DELETE transaction, each and every record that is identified according to your query conditions, gets marked for deletion. As other clients might be accessing the data at the same time, it cannot immediately remove them from the table, because they have to see their respective snapshot to guarantee the atomicity of the deletion.
Once all records have been marked for deletion, the transaction is successfully committed. And even then they cannot be immediately removed from the actual data pages, before all other transactions that worked with a snapshot value before your DELETE transaction, have ended as well.
So in fact your 3 minutes are not really that slow, considering the fact that all records have to be modified in order to prepare them for removal in a transaction safe way. Probably you will "hear" your hard disk working while the statement runs. This is caused by accessing all the rows.
To improve performance you can try to increase InnoDB buffer pool size for your server and try to limit other access to the database while you DELETE, thereby also reducing the number of historic versions InnoDB has to maintain per record.
With the additional memory InnoDB might be able to read your table (mostly) into memory and avoid some disk seeking time.
Try this:
DELETE a
FROM a
INNER JOIN b
on a.id = b.id
Using subqueries tends to be slower than joins, as they are run for each record in the outer query.
This is what I always do, when I have to operate with super large data (here: a sample test table with 150000 rows):
drop table if exists employees_bak;
create table employees_bak like employees;
insert into employees_bak
select * from employees
where emp_no > 100000;
rename table employees to employees_todelete;
rename table employees_bak to employees;
drop table employees_todelete;
In this case the SQL filters 50,000 rows into the backup table.
The query cascade runs on my slow machine in 5 seconds.
You can replace the INSERT INTO ... SELECT with your own filter query.
That is the trick for performing mass deletions on big databases! ;=)
Your time of three minutes seems really slow. My guess is that the id column is not being indexed properly. If you could provide the exact table definition you're using that would be helpful.
I created a simple python script to produce test data and ran multiple different versions of the delete query against the same data set. Here's my table definitions:
drop table if exists a;
create table a
(id bigint unsigned not null primary key,
data varchar(255) not null) engine=InnoDB;
drop table if exists b;
create table b like a;
I then inserted 100k rows into a and 25k rows into b (22.5k of which were also in a). Here are the results of the various delete commands. By the way, I dropped and repopulated the table between runs.
mysql> DELETE FROM a WHERE EXISTS (SELECT b.id FROM b WHERE a.id=b.id);
Query OK, 22500 rows affected (1.14 sec)
mysql> DELETE FROM a USING a LEFT JOIN b ON a.id=b.id WHERE b.id IS NOT NULL;
Query OK, 22500 rows affected (0.81 sec)
mysql> DELETE a FROM a INNER JOIN b on a.id=b.id;
Query OK, 22500 rows affected (0.97 sec)
mysql> DELETE QUICK a.* FROM a,b WHERE a.id=b.id;
Query OK, 22500 rows affected (0.81 sec)
All the tests were run on an Intel Core2 quad-core 2.5GHz, 2GB RAM with Ubuntu 8.10 and MySQL 5.0. Note, that the execution of one sql statement is still single threaded.
Update:
I updated my tests to use itsmatt's schema. I slightly modified it by removing the auto increment (I'm generating synthetic data) and the character set encoding (it wasn't working, and I didn't dig into it).
Here's my new table definitions:
drop table if exists a;
drop table if exists b;
drop table if exists c;
create table c (id varchar(30) not null primary key) engine=InnoDB;
create table a (
id bigint(20) unsigned not null primary key,
c_id varchar(30) not null,
h int(10) unsigned default null,
i longtext,
j bigint(20) not null,
k bigint(20) default null,
l varchar(45) not null,
m int(10) unsigned default null,
n varchar(20) default null,
o bigint(20) not null,
p tinyint(1) not null,
key l_idx (l),
key h_idx (h),
key m_idx (m),
key c_id_idx (id, c_id),
key c_id_fk (c_id),
constraint c_id_fk foreign key (c_id) references c(id)
) engine=InnoDB row_format=dynamic;
create table b like a;
I then reran the same tests with 100k rows in a and 25k rows in b (and repopulating between runs).
mysql> DELETE FROM a WHERE EXISTS (SELECT b.id FROM b WHERE a.id=b.id);
Query OK, 22500 rows affected (11.90 sec)
mysql> DELETE FROM a USING a LEFT JOIN b ON a.id=b.id WHERE b.id IS NOT NULL;
Query OK, 22500 rows affected (11.48 sec)
mysql> DELETE a FROM a INNER JOIN b on a.id=b.id;
Query OK, 22500 rows affected (12.21 sec)
mysql> DELETE QUICK a.* FROM a,b WHERE a.id=b.id;
Query OK, 22500 rows affected (12.33 sec)
As you can see this is quite a bit slower than before, probably due to the multiple indexes. However, it is nowhere near the three minute mark.
Something else that you might want to look at is moving the longtext field to the end of the schema. I seem to remember that MySQL performs better if all the size-restricted fields come first and text, blob, etc. are at the end.
You're doing your subquery on 'b' for every row in 'a'.
Try:
DELETE FROM a USING a LEFT JOIN b ON a.id = b.id WHERE b.id IS NOT NULL;
Try this out:
DELETE QUICK A.* FROM A,B WHERE A.ID=B.ID
It is much faster than normal queries.
Refer to the syntax: http://dev.mysql.com/doc/refman/5.0/en/delete.html
I know this question has been pretty much solved due to OP's indexing omissions but I would like to offer this additional advice, which is valid for a more generic case of this problem.
I have personally dealt with having to delete many rows from one table that exist in another, and in my experience it's best to do the following, especially if you expect lots of rows to be deleted. Most importantly, this technique will reduce replication slave lag, as the longer each single mutator query runs, the worse the lag gets (replication is single-threaded).
So, here it is: do a SELECT first, as a separate query, remembering the IDs returned in your script/application, then continue on deleting in batches (say, 50,000 rows at a time).
This will achieve the following:
each one of the delete statements will not lock the table for too long, thus not letting replication lag get out of control. It is especially important if you rely on your replication to provide you with relatively up-to-date data. The benefit of using batches is that if you find that each DELETE query still takes too long, you can adjust it to be smaller without touching any DB structures.
another benefit of using a separate SELECT is that the SELECT itself might take a long time to run, especially if it can't for whatever reason use the best DB indexes. If the SELECT is inner to a DELETE, when the whole statement migrates to the slaves, it will have to do the SELECT all over again, potentially lagging the slaves because it has to do the long select all over again. Slave lag, again, suffers badly. If you use a separate SELECT query, this problem goes away, as all you're passing is a list of IDs.
Let me know if there's a fault in my logic somewhere.
For more discussion on replication lag and ways to fight it, similar to this one, see MySQL Slave Lag (Delay) Explained And 7 Ways To Battle It
P.S. One thing to be careful about is, of course, potential edits to the table between the time the SELECT finishes and the DELETEs start. I will let you handle such details by using transactions and/or logic pertinent to your application.
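A minimal sketch of the idea; the batching loop and the remembered id list live in your application, and the id list shown is just an illustrative placeholder:
-- 1) run once and remember the matching ids in the application
SELECT a.id
FROM a
INNER JOIN b ON a.id = b.id;

-- 2) then, repeatedly, delete one remembered batch at a time (say 50,000 ids per batch)
DELETE FROM a WHERE id IN (101, 102, 103);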
DELETE FROM a WHERE id IN (SELECT id FROM b)
Maybe you should rebuild the indices before running such a huge query. Well, you should rebuild them periodically.
REPAIR TABLE a QUICK;
REPAIR TABLE b QUICK;
and then run any of the above queries (i.e.)
DELETE FROM a WHERE id IN (SELECT id FROM b)
The query itself is already in an optimal form; updating the indexes causes the whole operation to take that long. You could disable the keys on that table before the operation; that should speed things up. You can turn them back on at a later time, if you don't need them immediately.
Another approach would be adding a deleted flag-column to your table and adjusting other queries so they take that value into account. The fastest boolean type in mysql is CHAR(0) NULL (true = '', false = NULL). That would be a fast operation, you can delete the values afterwards.
The same thoughts expressed in sql statements:
ALTER TABLE a ADD COLUMN deleted CHAR(0) NULL DEFAULT NULL;
-- The following query should be faster than the delete statement:
UPDATE a INNER JOIN b ON a.id = b.id SET a.deleted = '';
-- This is the catch, you need to alter the rest
-- of your queries to take the new column into account:
SELECT * FROM a WHERE deleted IS NULL;
-- You can then issue the following queries in a cronjob
-- to clean up the tables:
DELETE FROM a WHERE deleted IS NOT NULL;
If that, too, is not what you want, you can have a look at what the mysql docs have to say about the speed of delete statements.
BTW, after posting the above on my blog, Baron Schwartz from Percona brought to my attention that his maatkit already has a tool just for this purpose - mk-archiver. http://www.maatkit.org/doc/mk-archiver.html.
It is most likely your best tool for the job.
Obviously the SELECT query that builds the foundation of your DELETE operation is quite fast so I'd think that either the foreign key constraint or the indexes are the reasons for your extremely slow query.
Try
SET foreign_key_checks = 0;
/* ... your query ... */
SET foreign_key_checks = 1;
This would disable the checks on the foreign key. Unfortunately you cannot disable (at least I don't know how) the key-updates with an InnoDB table. With a MyISAM table you could do something like
ALTER TABLE a DISABLE KEYS
/* ... your query ... */
ALTER TABLE a ENABLE KEYS
I actually did not test if these settings would affect the query duration. But it's worth a try.
Connect to the database using a terminal and execute the commands below, looking at the execution time of each; you'll find that the times to delete 10, 100, 1000, 10000, and 100000 records do not simply multiply.
DELETE FROM #{$table_name} WHERE id < 10;
DELETE FROM #{$table_name} WHERE id < 100;
DELETE FROM #{$table_name} WHERE id < 1000;
DELETE FROM #{$table_name} WHERE id < 10000;
DELETE FROM #{$table_name} WHERE id < 100000;
The time to delete 100 thousand records is not 10 times the time to delete 10 thousand records.
So, besides finding a way to delete records faster, there are some indirect methods.
1. We can rename table_name to table_name_bak, and then select the records we want to keep from table_name_bak back into table_name.
2. To delete 10000 records, we can delete 1000 records 10 times. Here is an example Ruby script that does it:
#!/usr/bin/env ruby
require 'mysql2'
$client = Mysql2::Client.new(
:as => :array,
:host => '10.0.0.250',
:username => 'mysql',
:password => '123456',
:database => 'test'
)
$ids = (1..1000000).to_a
$table_name = "test"
until $ids.empty?
  ids = $ids.shift(1000).join(", ")
  puts "delete =================="
  $client.query("
    DELETE FROM #{$table_name}
    WHERE id IN ( #{ids} )
  ")
end
The basic technique for deleting multiple rows from a single MySQL table through the id field:
DELETE FROM tbl_name WHERE id >= 100 AND id <= 200;
This query deletes the rows whose id falls between 100 and 200 from the given table.