Taking forever to update a table from another - mysql

I have a temp table with 14k records and a main table with 5 million records. I am updating the main table from the temp table using the SQL below:
UPDATE customPricing t1
INNER JOIN customPricingIncremental t2
    ON (t1.customerClass = t2.customerClass AND t1.customerName = t2.customerName
    AND t1.svcType = t2.svcType AND t1.svcDuration = t2.svcDuration
    AND t1.durationPeriod = t2.durationPeriod AND t1.partNumberSKU = t2.partNumberSKU)
SET t1.customerId = t2.customerId, t1.customerNumber = t2.customerNumber,
    t1.custPartNumber = t2.custPartNumber, t1.sppl = t2.sppl, t1.priceMSRP = t2.priceMSRP,
    t1.partnerPriceDistiDvarOEM = t2.partnerPriceDistiDvarOEM, t1.msrpSvcPrice = t2.msrpSvcPrice,
    t1.partnerSvcPrice = t2.partnerSvcPrice, t1.msrpBundlePrice = t2.msrpBundlePrice,
    t1.partnerBundlePrice = t2.partnerBundlePrice, t1.startDate = t2.startDate,
    t1.endDate = t2.endDate, t1.currency = t2.currency, t1.countryCode = t2.countryCode,
    t1.inventoryItemId = t2.inventoryItemId, t1.flexField1 = t2.flexField1,
    t1.flexField2 = t2.flexField2, t1.flexField3 = t2.flexField3,
    t1.flexField4 = t2.flexField4, t1.flexField5 = t2.flexField5;
customerClass, customerName, durationPeriod, svcDuration, and partNumberSKU are all indexed on both tables (each with a prefix length of only 10); there is no primary key or unique index on either table.
It takes forever to update the table, and in the end I get timed out.
What am I doing wrong?
Nitesh

Try disabling the nonunique keys temporarily (note that DISABLE KEYS only has an effect on MyISAM tables):
ALTER TABLE customPricing DISABLE KEYS
Now run your query, and then enable them again:
ALTER TABLE customPricing ENABLE KEYS
Do this from the mysql client or a script rather than phpMyAdmin, as the statements might only apply to the current session.
Also, watch out for any triggers on the target table.
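For example, a quick way to check for triggers that would fire once per updated row (a sketch; run it in the schema that holds the table):
SHOW TRIGGERS WHERE `Table` = 'customPricing';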

Related

Better way of copying data?

I have two tables where I want to copy the post_id from one table to the other when testpostmeta.meta_value = testTable.stockcode.
There are about 2,000 rows in testTable and 65k rows in testpostmeta.
The code works; it just takes about 1-2 minutes to complete. Is there anything that can be done to speed the hamster wheel up?
UPDATE testTable
INNER JOIN testpostmeta
ON testTable.stockcode = testpostmeta.meta_value
SET testTable.post_id = testpostmeta.post_id
I tried adding WHERE testpostmeta.meta_value = testTable.stockcode but that didn't work.
Be sure you have proper indexes on testTable and testpostmeta:
CREATE INDEX my_idx1 ON testTable (stockcode);
CREATE INDEX my_idx2 ON testpostmeta (meta_value , post_id);
Try adding an index to each table that matches the field used for your JOIN criteria:
ALTER TABLE testTable ADD INDEX stockcode_idx(stockcode);
ALTER TABLE testpostmeta ADD INDEX meta_idx(meta_value);
You can turn off autocommit so the whole update commits once:
SET autocommit = 0;
-- Insert/Update/Delete statements here
COMMIT;
If post_id is indexed in the target table, that can also slow down the update.
Try disabling the index before the operation and enabling it afterwards, so the data is indexed once at the end rather than on each change:
ALTER TABLE targetTable DISABLE KEYS;
-- Your UPDATE query
ALTER TABLE targetTable ENABLE KEYS;
And as said in the reference:
Performing multiple updates together is much quicker than doing one at a time if you lock the table.
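As a sketch with the tables from this question (LOCK TABLES must name every table the statement touches):
LOCK TABLES testTable WRITE, testpostmeta READ;
UPDATE testTable
INNER JOIN testpostmeta ON testTable.stockcode = testpostmeta.meta_value
SET testTable.post_id = testpostmeta.post_id;
UNLOCK TABLES;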
Here are some reference pages that give more ideas on what can be done:
8.2.4.2 Optimizing UPDATE Statements
8.5.4 Bulk Data Loading for InnoDB Tables

Why am I getting `deleting from reference tables` even though I have disabled foreign keys

So I am deleting records from a table by joining it to another table.
I disabled the foreign key checks before running this statement.
I have two tables, A and B, and I am deleting rows from table A using a join with table B, matching records on the id column plus one more criterion in the WHERE clause.
Here is the query
SET FOREIGN_KEY_CHECKS=0;
delete db.A from db.A join db.B USING(id) where name='xx';
SET FOREIGN_KEY_CHECKS=1;
Why do I still get the following 'State' in mysql process list
deleting from reference tables
Because you are "referencing a table" (db.A) in a multi-table query (over db.A and db.B).
If you don't specify a target table in a "delete join", the query will not work, because MySQL will not know which table you are trying to delete from.
So you are referencing a table for the delete, and that is exactly the state the process list reports.
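To illustrate with the question's tables: the table names listed right after DELETE choose which tables actually lose rows, while the FROM clause only defines the join.
-- delete matching rows from A only (the question's query)
DELETE db.A FROM db.A JOIN db.B USING (id) WHERE name = 'xx';
-- delete matching rows from both tables
DELETE db.A, db.B FROM db.A JOIN db.B USING (id) WHERE name = 'xx';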

MySQL : updating a table from another table by leftjoin vs iterating

I have two tables T1 and T2 and want to update one field of T1 from T2 where T2 holds massive data.
What is more efficient?
Updating T1 in a for loop, iterating over the values,
or
left joining it with T2 and updating?
Please note that I'm updating these tables from a shell script.
In general, the JOIN will always work much better than a loop. The size should not be an issue as long as the tables are properly indexed.
There is no simple answer as to which will be more efficient; it depends on the table sizes and on how much data you update in one go.
Suppose you are using the InnoDB engine and frequently updating 1,000 or more rows in one go through a join of two heavy tables: that is not a good idea on a production server, because the update will hold locks for some time, and other operations can be stalled by that locking.
Option 1: If you are updating only a few rows, matched on properly indexed fields (preferably the primary key), then you can go with the join.
Option 2: If you are updating a large amount of data based on a multi-table join, the approach below is better:
Step 1: Create a stored procedure.
Step 2: Hold the results of the query below in a cursor.
Suppose you want to update field1 of table1 with the corresponding field2 data of table2:
SELECT a.primary_key,b.field2 FROM table1 a JOIN table2 b ON a.primary_key=b.foreign_key WHERE [place CONDITION here IF any...];
Step 3: Now update the rows one by one, based on the primary key, using the values held in the cursor.
Step 4: Call this stored procedure from your script.
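A minimal sketch of such a procedure, assuming the placeholder tables table1(primary_key, field1) and table2(foreign_key, field2) from the query above; the procedure name, column types, and database name are hypothetical:
DELIMITER //
CREATE PROCEDURE update_field1_from_table2()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE v_pk INT;
    DECLARE v_val VARCHAR(255);
    -- Step 2: the join results go into a cursor
    DECLARE cur CURSOR FOR
        SELECT a.primary_key, b.field2
        FROM table1 a JOIN table2 b ON a.primary_key = b.foreign_key;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO v_pk, v_val;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- Step 3: one small primary-key update per row,
        -- so each statement holds its locks only briefly
        UPDATE table1 SET field1 = v_val WHERE primary_key = v_pk;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;

-- Step 4: call it from the shell script, e.g.
-- mysql -e "CALL update_field1_from_table2();" mydb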

Mysql delete and optimize very slow

I searched the Internet and Stack Overflow for my problem, but couldn't find a good solution.
I have a table (MySQL, MyISAM engine) containing 300,000 rows (one column is a BLOB field).
I must use:
DELETE FROM tablename WHERE id IN (1,4,7,88,568,.......)
There are nearly 30,000 ids in the IN list.
It takes nearly an hour. It also does not make the .MYD file smaller, even though I delete 10% of the rows, so I then run the OPTIMIZE TABLE command, which also takes very long. (I have to run it, because disk space matters to me.)
What's a way to improve performance when deleting the data as above, and to recover the space? (Increasing a buffer size? Which one? Or something else?)
With IN, MySQL scans all the rows in the table and matches each record against the IN list. The list of IN predicates is sorted, so each of the 300,000 rows gets a binary search against the 30,000 ids.
If you do this with a JOIN on a temporary table instead (no indexes needed on the temp table), then, assuming id is indexed, the database does 30,000 binary lookups against a 300,000-record index.
So: 300,000 binary searches against a sorted list of 30,000 ids, or 30,000 binary searches against an index of 300,000 records... which is faster? The second one, by far.
Also, delaying the index rebuilding with DELETE QUICK will result in much faster deletes. All records will simply be marked deleted, both in the data file and in the index, and the index will not be rebuilt.
Then, to recover space and rebuild the indexes at a later time, run OPTIMIZE TABLE.
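A minimal sketch of that approach, using the figures from the question (ids_to_delete is a hypothetical name, and the full id list is elided):
CREATE TEMPORARY TABLE ids_to_delete (id INT NOT NULL);
INSERT INTO ids_to_delete (id) VALUES (1), (4), (7), (88), (568); -- ...remaining ids
-- mark the rows deleted without merging index leaves
DELETE QUICK tablename FROM tablename JOIN ids_to_delete USING (id);
-- later, when load allows, reclaim space and rebuild the indexes
OPTIMIZE TABLE tablename;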
The size of the list in your IN() statement may be the cause. You could add the IDs to a temporary table and join to do the deletes. Also, as you are using MyISAM you can use the DELETE QUICK option to avoid the index hit whilst deleting:
For MyISAM tables, if you use the QUICK keyword, the storage engine
does not merge index leaves during delete, which may speed up some
kinds of delete operations.
I think the fastest approach is to create a new table, insert into it only the rows you do not want to delete, and then swap it in for the original table.
Something like this:
INSERT INTO NewTable SELECT * FROM My_Table WHERE ... ;
Then you can use RENAME TABLE to rename the copy to the original name
RENAME TABLE My_Table TO My_Table_old, NewTable TO My_Table ;
And then finally drop the original table
DROP TABLE My_Table_old;
Try this:
Create a table named temptable with a single id column.
Insert the ids 1,4,7,88,568,...... into it.
Then use a delete join, something like:
DELETE a FROM originaltable AS a INNER JOIN temptable AS b ON a.id = b.id;
It's just an idea; check the syntax before you run it.

how to delete duplicate records in mysql table

I'm having an issue with finding and deleting duplicate records. I have a table with IDs called CallDetailRecordID which I need to scan, deleting the duplicates. The reason there are duplicates is that I'm exporting data to a special archiving engine that works with MySQL, and it doesn't support indexing.
I tried using SELECT DISTINCT, but it doesn't work. Is there another way? I'm hoping I can create a stored procedure and have it run weekly to perform the clean-up.
Your help is highly appreciated.
Thank you
CREATE TABLE tmp_table LIKE `table`;
-- keeps one row per CallDetailRecordID; needs ONLY_FULL_GROUP_BY disabled
INSERT INTO tmp_table (SELECT * FROM `table` GROUP BY CallDetailRecordID);
RENAME TABLE `table` TO old_table, tmp_table TO `table`;
Drop the old table if you want, and add a LOCK TABLES statement at the beginning to avoid lost inserts.
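A sketch of the full sequence with locking (backticks because `table` stands in for your real table name):
CREATE TABLE tmp_table LIKE `table`;
-- block writers while the deduplicated copy is built
LOCK TABLES `table` WRITE, tmp_table WRITE;
INSERT INTO tmp_table (SELECT * FROM `table` GROUP BY CallDetailRecordID);
UNLOCK TABLES;
-- the swap itself is atomic; rows inserted between UNLOCK TABLES
-- and RENAME TABLE would still land in the old table and be lost
RENAME TABLE `table` TO old_table, tmp_table TO `table`;
DROP TABLE old_table;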