A user of a WordPress site with a form plugin accidentally deleted ALL of the entries for a specific form.
I went into the daily backup and I have a .sql file with all of the data for the table where the form info is stored.
Now I need to merge that back into the database, but the dump uses INSERT INTO and stops immediately with an error because most of the entries already exist.
I tried using "ON DUPLICATE KEY UPDATE id=id", but it ignored everything.
I've been searching here and on Google for a couple hours without any kind of solution.
The basic structure of the dump is:
LOCK TABLES `wp_frm_items` WRITE;
INSERT INTO `wp_frm_items` (`id`, `item_key`, `name`, `description`, `ip`, `form_id`, `post_id`, `user_id`, `parent_item_id`, `updated_by`, `created_at`, `updated_at`) VALUES (2737,'jb7x3c','Brad Pitt','a:2:{s:7:\"browser\";s:135:\"Mozilla/5.0 (iPhone; CPU iPhone OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10B329 Safari/8536.25\";s:8:\"referrer\";s:38:\"http://mysite/myform/\r\n\";}','192.168.1.1',6,0,NULL,NULL,NULL,'2013-06-30 15:09:20','2013-06-30 15:09:20');
UNLOCK TABLES;
ID #2737 exists, so I either want to ignore it or just update the existing table.
Seems like there would be an easy way to import data from a MySQL dump into an existing database.
P.S. I'm trying to do this in phpMyAdmin.
If the data has not changed for those rows, you can use REPLACE instead of INSERT, i.e. change INSERT INTO to REPLACE INTO in the dump file.
If you want to skip rows instead, one possibility is to use a temporary table. Load the rows there and DELETE those rows that have an id that already exists in the old table:
DELETE FROM my_new_temptable WHERE id IN (SELECT id FROM wp_frm_items);
Then just insert the remaining rows into wp_frm_items.
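A minimal sketch of that workflow, assuming you can edit the dump so its INSERT statements target a staging table (the name wp_frm_items_restore is made up):
-- Staging table with the same structure as the live table
CREATE TABLE wp_frm_items_restore LIKE wp_frm_items;
-- Run the edited dump against wp_frm_items_restore, then
-- drop the rows whose id already exists in the live table
DELETE FROM wp_frm_items_restore WHERE id IN (SELECT id FROM wp_frm_items);
-- Copy the remaining rows over and clean up
INSERT INTO wp_frm_items SELECT * FROM wp_frm_items_restore;
DROP TABLE wp_frm_items_restore;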
Or you can move the new rows to a temporary table before restoring from the dump and copy them from there back into the original table. There are many possibilities.
Also, many SQL tools have table merging capabilities.
I have 2 databases from a WordPress website.
An issue happened and 50% of my posts disappeared.
I have database 1, a copy from 03.03.21,
and the existing database 2 of the website from 24.03.21.
So in database 1 I have many posts that were deleted,
and database 2 has some new posts that don't exist in the older database 1.
Is there any software or a way to merge these 2 databases?
To compare the databases and add the entries to the newer database that are only in the older database?
I could do this manually, but one post has entries in many tables, and it's going to be hard to recover the deleted posts.
There is no easy solution, but you could try to make a "merge" locally for testing purposes.
Here's how I would do it; I can't guarantee it will work.
1. Load the oldest backup into the server, let's say in a database named merge_target.
2. Load the 2nd backup (the most recent one) into the same server, let's say in a merge_source database.
3. Define a logical order to execute the merge for each table, this depends on the presence of foreign keys:
If a table A has a foreign key referencing table B, then you will need to merge table B before table A.
This may not work depending on your database structure (and I have never worked with WordPress myself).
4. Write and execute queries for each table, with some rules:
SELECT from the merge_source database
INSERT into the merge_target database
if a row already exists in merge_target (i.e. the source and target rows share the same primary key or unique key value), you can use MySQL features depending on what you want to do:
INSERT ON DUPLICATE KEY UPDATE if the existing row should be updated
INSERT IGNORE if the row should just be skipped
REPLACE if you really need to delete and re-insert the row
This could look like the following query (here with ON DUPLICATE KEY UPDATE; some_table is a placeholder for whichever table you are merging):
INSERT INTO merge_target.some_table (col_a, col_b, col_c)
SELECT
col_a
, col_b
, col_c
FROM merge_source.some_table
ON DUPLICATE KEY UPDATE
col_b = VALUES(col_b)
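The INSERT IGNORE and REPLACE variants follow the same shape (again with some_table as a placeholder):
INSERT IGNORE INTO merge_target.some_table (col_a, col_b, col_c)
SELECT col_a, col_b, col_c FROM merge_source.some_table;
REPLACE INTO merge_target.some_table (col_a, col_b, col_c)
SELECT col_a, col_b, col_c FROM merge_source.some_table;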
Documentation:
INSERT ... SELECT
ON DUPLICATE KEY UPDATE
REPLACE
INSERT IGNORE is in the INSERT documentation page
Not sure it will help, but I wrote a database migration framework in PHP; you can take a look: Fregata.
I recently updated my database, causing it to erase all data. Unfortunately, I imported a 2-week-old SQL backup into my new database instead of the newest one...
How can I import the missing data without destroying the new data (2 days have passed since the server update)? The IDs have been taken by the new data, so the statements in my newest SQL backup that say INSERT INTO table1 (ID, ID_Table2) VALUES (123, 456) are no longer true, since that reference to Table2 will need the new ID!
Well, you can select all rows that are not present yet. There might be a more elegant solution, but this should do the trick:
INSERT INTO `new_database`.`new_table` (`column_1`, `column_2`, ...)
SELECT `column_1`, `column_2`, ... FROM `old_database`.`old_table` AS `ot`
WHERE `ot`.`id` NOT IN (SELECT `id` FROM `new_database`.`new_table`)
Basically what you're doing here is selecting all the rows from the old table which are not yet present in the new table (the NOT IN clause), and inserting those rows.
I get a report in a tab-delimited file which stores some SKUs and their current quantities.
Most of the time the inventory is the same and we just have to update the quantities.
But it can happen that a new SKU is in the list, which we have to insert instead of updating.
We are using an InnoDB table for storing those SKUs. At the moment we just split the file by tabs and line breaks and build an INSERT ... ON DUPLICATE KEY UPDATE query, which is quite inefficient, because INSERT is expensive on InnoDB, right? It's also tricky because when a list with a lot of SKUs comes in (> 20k), it just takes some minutes.
So my plan for now is to just do a LOAD DATA INFILE into a tmp table and afterwards do the INSERT ... ON DUPLICATE KEY UPDATE, which should be faster, I think.
Also, is there another solution which does a simple UPDATE in the first place and only performs an INSERT for the rows that are left? That would be perfect, but I could not find anything about it yet. Is there a way to delete rows which returned an update count of 1?
1. Sort the CSV file by the PRIMARY KEY of the table.
2. LOAD DATA INFILE into a separate table (as you said).
3. INSERT INTO real_table SELECT * FROM tmp_table ON DUPLICATE KEY UPDATE ... (note: this is a single INSERT; see the sketch below).
Caveat: this may block the table from other uses during step 3. A solution: break the CSV into 1000-row chunks and COMMIT after each chunk.
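A minimal sketch of steps 2 and 3, assuming an InnoDB table inventory(sku PRIMARY KEY, qty) and a tab-delimited file /tmp/report.tsv (all of these names are made up):
-- Stage the report in a table with the same structure as the real one
CREATE TEMPORARY TABLE sku_stage LIKE inventory;
LOAD DATA INFILE '/tmp/report.tsv'
INTO TABLE sku_stage
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(sku, qty);
-- Single statement: insert new SKUs, update quantities of existing ones
INSERT INTO inventory (sku, qty)
SELECT sku, qty FROM sku_stage
ON DUPLICATE KEY UPDATE qty = VALUES(qty);
DROP TEMPORARY TABLE sku_stage;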
I have two tables:
tableOriginal
tableBackup
They have exactly the same structure.
I want a SQL statement I can run at any time of the day that will copy all the rows from tableOriginal to tableBackup WITHOUT overwriting items in tableBackup. Basically, this command must synchronize tableBackup with tableOriginal.
How do I do that?
INSERT INTO tableBackup SELECT * FROM tableOriginal;
As long as there is no issue with primary keys being updated or replaced by new incoming data, this should not create an issue for you. However, as you already know, the backup table will have more data after your command, since it did not delete the previous data it had.
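If duplicate primary keys are possible, a variant that simply skips rows already present in tableBackup (not part of the original suggestion) would be:
INSERT IGNORE INTO tableBackup SELECT * FROM tableOriginal;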
Why don't you first delete all the data in tableBackup, then INSERT the data from tableOriginal into tableBackup?
DELETE FROM tableBackup;
INSERT INTO tableBackup SELECT * FROM tableOriginal;
Why do we need to delete first?
Because if we insert unique data into tableBackup without clearing it first, the next insert will not execute, because we would be inserting some data that is already there.
Hope you get what I'm trying to say.
Hi, I have a huge unnormalized MySQL database with ~100 million URLs (~20% dupes) divided into identical split tables of 13 million rows each.
I want to move the urls into a normalized database on the same MySQL server.
The old database table is unnormalized, and the urls have no index.
It looks like this:
entry{id,data,data2, data3, data4, possition,rang,url}
And I'm going to split it up into multiple tables:
url{id,url}
data{id,data}
data1{id,data}
etc
The first thing I did was:
INSERT IGNORE INTO newDatabase.url (url)
SELECT DISTINCT unNormalised.url FROM oldDatabase.unNormalised
But the " SELECT DISTINCT unNormalised.url" (13 million rows) took ages, and I figured that that since "INSERT IGNORE INTO" also do a comparison, it would be fast to just do a
INSERT IGNORE INTO newDatabase.url (url)
SELECT unNormalised.url FROM oldDatabase.unNormalised
without the DISTINCT. Is this assumption wrong?
Anyway, it still takes forever and I need some help. Is there a better way of dealing with this huge quantity of unnormalized data?
Would it be best if I did a SELECT DISTINCT unNormalised.url on the entire 100 million row database, exported all the ids, and then moved only those ids to the new database with, let's say, a PHP script?
All ideas are welcome; I have no clue how to port all this data without it taking a year!
P.S. It is hosted on an Amazon RDS server.
Thank you!
As the MySQL Manual states, LOAD DATA INFILE is quicker than INSERT, so the fastest way to load your data would be:
LOCK TABLES url WRITE;
ALTER TABLE url DISABLE KEYS;
LOAD DATA INFILE 'urls.txt'
IGNORE
INTO TABLE url
...;
ALTER TABLE url ENABLE KEYS;
UNLOCK TABLES;
But since you already have the data loaded into MySQL and just need to normalize it, you might try:
LOCK TABLES url WRITE;
ALTER TABLE url DISABLE KEYS;
INSERT IGNORE INTO url (url)
SELECT url FROM oldDatabase.unNormalised;
ALTER TABLE url ENABLE KEYS;
UNLOCK TABLES;
My guess is that INSERT IGNORE ... SELECT will be faster than INSERT IGNORE ... SELECT DISTINCT but that's just a guess.