mysqldump table without dumping the primary key - mysql

I have one table spread across two servers running MySQL 4. I need to merge these into one server for our test environment.
These tables literally have millions of records each, and the reason they are on two servers is because of how huge they are. Any altering or paging of the tables would give us too big a performance hit.
Because they are on a production environment, it is impossible for me to alter them in any way on their existing servers.
The issue is that the primary key is a unique auto-incrementing field, so the two tables have overlapping key values.
I've been trying to figure out how to use the mysqldump command to ignore certain fields, but --disable-keys merely alters the table instead of getting rid of the keys completely.
At this point it's looking like I'm going to need to modify the database structure to use a checksum or hash of the two fields that actually should be unique in combination as the primary key... I really don't want to do this.
Help!

To solve this problem, I looked up this question, found #pumpkinthehead's answer, and realized that all we need to do is find+replace the primary key in each row with NULL so that MySQL will use the default auto_increment value instead.
(your complete mysqldump command) | sed -e "s/([0-9]*,/(NULL,/gi" > my_dump_with_no_primary_keys.sql
Original output:
INSERT INTO `core_config_data` VALUES
(2735,'default',0,'productupdates/configuration/sender_email_identity','general'),
(2736,'default',0,'productupdates/configuration/unsubscribe','1'),
Transformed Output:
INSERT INTO `core_config_data` VALUES
(NULL,'default',0,'productupdates/configuration/sender_email_identity','general'),
(NULL,'default',0,'productupdates/configuration/unsubscribe','1'),
Note: This is still a hack; for example, it will fail if your auto-increment column is not the first column, but it solves my problem 99% of the time.

If you don't care what the value of the auto_increment column will be, then just load the first file, rename the table, then recreate the table and load the second file. Finally, use
INSERT INTO newly_created_table_name (all, columns, except, the, auto_increment, column)
SELECT all, columns, except, the, auto_increment, column
FROM renamed_table_name
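As a rough sketch of that whole flow (table and column names here are placeholders for your own):
-- load the first server's dump, then set its rows aside:
RENAME TABLE my_table TO my_table_first;
-- load the second server's dump (its CREATE TABLE statement recreates my_table),
-- then merge the first server's rows back in, letting auto_increment assign fresh keys:
INSERT INTO my_table (col_a, col_b, col_c)
SELECT col_a, col_b, col_c
FROM my_table_first;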

You can create a view of the table without the primary key column, then run mysqldump on that view.
So if your table "users" has the columns: id, name, email
CREATE VIEW myView AS
SELECT name, email FROM users
Edit: ah, I see. I'm not sure if there's any other way, then.

Clone your table
Drop the column in the clone
Dump the clone table without the structure (but with the -c option to get complete inserts)
Import it where you want
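A sketch of those steps, assuming a table named users with an auto-increment column id (table, column, user, and database names are placeholders):
CREATE TABLE users_clone LIKE users;
INSERT INTO users_clone SELECT * FROM users;
ALTER TABLE users_clone DROP COLUMN id;
mysqldump --no-create-info -c -u some_user -p some_db users_clone > users_clone.sql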

This is a total pain. I get around this issue by running something like
sed -e "s/([0-9]*,/(/gi" export.sql > expor2.sql
on the dump to get rid of the primary keys and then
sed -e "s/VALUES/(col1,col2,...etc.) VALUES/gi" LinxImport2.sql > LinxImport3.sql
for all of the columns except for the primary key. Of course, you'll have to be careful that ([0-9]*, doesn't replace anything that you actually want.
Hope that helps someone.

SELECT null as fake_pk, `col_2`, `col_3`, `col_4` INTO OUTFILE 'your_file'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM your_table;
LOAD DATA INFILE 'your_file' INTO TABLE your_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
For added fanciness, you can set a BEFORE INSERT trigger on your receiving table that sets the new primary key for each row before the insertion occurs, thereby using regular dumps and still clearing your pk. Not tested, but feeling pretty confident about it.
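Untested, as noted, but such a trigger could look roughly like this (table and column names are placeholders; inserting 0 into an auto_increment column makes MySQL generate the next value, unless NO_AUTO_VALUE_ON_ZERO is set):
CREATE TRIGGER regen_pk BEFORE INSERT ON your_table
FOR EACH ROW
SET NEW.id = 0;  -- discard the dumped key; the engine assigns the next auto_increment value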

Use a dummy temporary primary key:
Use mysqldump normally (with --opt and -c). For example, your primary key is 'id'.
Edit the output files and add a column "dummy_id" to the structure of your table with the same type as 'id' (but not a primary key, of course). Then modify the INSERT statements and replace 'id' with 'dummy_id'. Once imported, drop the column 'dummy_id'.
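One way to read those steps, with users/id as placeholder names (the helper column is shown here as an ALTER on the target table for brevity):
ALTER TABLE users ADD COLUMN dummy_id INT NOT NULL DEFAULT 0;  -- same type as id, but not a key
-- in the dump, change "INSERT INTO users (id, ...)" to "INSERT INTO users (dummy_id, ...)"
-- import the edited dump, then remove the helper column:
ALTER TABLE users DROP COLUMN dummy_id;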

jimyi was on the right track.
This is one of the reasons why autoincrement keys are a PITA. One solution is not to delete data but add to it.
CREATE VIEW myView AS
SELECT id*10+$x AS id, name, email FROM users
(where $x is a single digit uniquely identifying the original database). Either create the view on the source database (which you hint may not be possible), use an extract routine like the one described by Autocracy, or load the data into staging tables on the test box.
Alternatively, don't create the table on the test system - instead put in separate tables for the src data then create a view which fetches from them both:
CREATE VIEW users AS
(SELECT * FROM users_on_a) UNION (SELECT * FROM users_on_b)
C.

The solution I've been using is to just do a regular SQL export of the data, then remove the primary key from the insert statements using a regex find & replace in an editor. Personally I use Sublime Text, but I'm sure TextMate, Notepad++ etc. can do the same.
Then I just run the query in whichever database the data should be inserted into, by copy-pasting the query into HeidiSQL's query window or phpMyAdmin. If there's a LOT of data, I save the insert query to an SQL file and use file import instead. Copy & paste with huge amounts of text often makes Chrome freeze.
This might sound like a lot of work, but it rarely takes more than a couple of minutes between export and import; probably a lot less than the accepted solution would. I've used this method on several hundred thousand rows without issue, but I think it would get problematic when you reach the millions.
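For reference, one option for that find & replace is to swap the key for NULL, assuming (as in the sed examples above) that the auto-increment value is the first field of each row:
Find:    \((\d+),
Replace: (NULL,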

I like the temporary table route.
create temporary table my_table_copy
select * from my_table;
alter table my_table_copy drop id;
-- Use your favorite dumping method for the temporary table
Like the others, this isn't a one-size-fits-all solution (especially given the OP's millions of rows), but even at 10^6 rows it only takes a few seconds to run and it works.
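One caveat: mysqldump opens its own connection and cannot see another session's TEMPORARY tables, so either make the copy a regular table or export from the same session, e.g. (the file path is just a placeholder):
SELECT * FROM my_table_copy
INTO OUTFILE '/tmp/my_table_copy.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';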

Related

Adding a UNIQUE key to a large existing MySQL table which is receiving INSERTs/DELETEs

I have a very large table (dozens of millions of rows) and a UNIQUE index needs to be added to a column on that table. I know for a fact that the table does contain duplicated values on that key, which I need to clean up (by deleting rows/resetting the value of the column to something unique that I can automatically generate). A plus is that the rows which are already duplicated do not get modified anymore.
What would be the right approach to perform a change like this, given that I will probably be using the Percona pt-osc tool and there are continuous deletes/inserts on the table? My plan was:
Add code that ensures no dupe IDs get inserted anymore. I probably need to add a separate table for this temporarily, since I want the database to enforce this for me and not the application: insert into the "shadow table" (which has a unique index) in a transaction together with my main table, and roll back any insert that tries to insert a duplicate value.
Backfill the table by zapping all invalid column values which are within the primary key range below $current_pkey_value
Then add the index and use pt-osc to changeover the table
Is there anything I am missing?
Since we use pt-online-schema-change, we are using triggers to perform the synchronisation from the existing table to a temp table. The tool actually has a special configuration key for this, --no-check-unique-key-change, which does exactly what we need: it agrees to perform the ALTER TABLE and sets up the triggers in such a way that, if a conflict occurs, INSERT .. IGNORE is applied and the first row to have used the now-unique value wins during synchronisation. For us this is a good tradeoff because all the duplicates we have seen resulted from data races, not from actual conflicts in the value generation process.
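A sketch of the invocation (database, table, column, and key names are placeholders):
# --no-check-unique-key-change acknowledges that rows losing the duplicate-key race
# are silently dropped during the copy
pt-online-schema-change \
  --alter "ADD UNIQUE KEY uniq_col (the_column)" \
  --no-check-unique-key-change \
  D=my_db,t=my_table \
  --execute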

re-inserting a table record and updating an auto increment primary index

I'm running MariaDB 5.5.56.
I'm looking to copy an entire row in a table, change one column, then insert the entire row back into the original table (I don't want to have to specify the individual fields because there are a lot of them). The problem I'm running into is how to deal with an auto-increment/primary key column.
example:
create temporary table t_ownership like ownership;
insert into t_ownership (select * from ownership where name='x' LIMIT 1);
update t_ownership set id='something else';
insert into ownership (select * from t_ownership);
I have a column "recno" that is an auto-increment that will create a collision in the database when I try to re-insert the slightly changed record back into the original table.
Something like this seems to work but doesn't result in an insert:
insert into ownership (select * from t_ownership) ON DUPLICATE KEY UPDATE recno=LAST_INSERT_ID(ownership.recno);
The above statement executes without error but does not add a row to table ownership.
So I think I'm close but not quite there...
What would be the best way to do this? I'd like to avoid doing an insert where I manually specify field/values. I just need to regenerate a new A.I. recno column on the insert.
NULL values inserted into auto-increment fields just end up getting the next auto-increment value, behaving equivalently to an INSERT that doesn't specify the field; so you should be able to update the source (the temp copy) to have NULL for that field.
However, one potential issue in scenarios like yours is that CREATE TEMPORARY TABLE ... LIKE could produce a table that does not allow you to set such fields to NULL; this would require you to either ALTER the temporary table or create it in a more explicit manner. Either way, it makes code/queries that do not specify columns even more reliant on knowing the columns.
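Following that route with the tables from the question (the column type here is an assumption), it would look roughly like:
-- let the copy accept NULL so auto_increment can regenerate recno on re-insert
ALTER TABLE t_ownership MODIFY recno INT NULL;
UPDATE t_ownership SET recno = NULL;
INSERT INTO ownership SELECT * FROM t_ownership;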
Personally, though, I would take the route below in the first place.
INSERT INTO theTable([list all but the auto-inc column])
SELECT [list all but the auto-inc column, with any replacements or modifications desired]
FROM ...[original query]...
It accomplishes the task in one query, makes the queries more self documenting, and only at the cost of a little typing (most of which a decent database browser, or query builder, will do for you).
The only argument really in favor of your current approach is that the table involved can be changed without necessarily breaking your queries; but that raises the question of whether it would be better for such table changes to break the queries, forcing them to be re-examined. If it is not an issue, it is a minor revision; but the alternative is queries that continue to be valid yet have the potential to cause unexpected behavior by copying information they were never intended to.

Fast delete duplicate records in MySQL

I'm trying to import a very big SQL dump (around 37 million rows) into an InnoDB table. There are tons of duplicates, and what I want is to prevent duplicate rows from being inserted without changing the actual dump. The field email might have duplicates. I tried the following: after importing the whole dump into the db, I executed this SQL:
set session old_alter_table=1;
ALTER IGNORE TABLE sample ADD UNIQUE (email);
But the second query ran for around an hour before I just canceled it.
What is the proper way to get rid of the duplicates?
I have a couple of ideas:
Maybe create the table with a unique index before importing, so duplicates are prevented during insertion without harming the whole process?
Maybe after importing the dump, select the distinct emails and insert them into another table?
From a .dump file
When importing, use -f for "force":
mysql -f -p < 2015-10-01.sql
This causes the import to continue after an error is encountered, which is useful in this case if you create the unique key constraint before importing.
From a .csv file
If you are using "LOAD DATA", use "IGNORE", e.g.:
LOAD DATA LOCAL INFILE 'somefile.csv' IGNORE
INTO TABLE some_db.some_tbl
FIELDS TERMINATED BY ';'
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(`somefield1`,`somefield2`);
According to the documentation:
If you specify IGNORE, rows that duplicate an existing row on a unique key value are discarded.
This requires you to create the unique key constraint before importing, which will be fast on an empty table.
Edit the dump file as follows:
Modify the CREATE TABLE statement to add a unique key on the email field, or add an ALTER TABLE statement after it.
Find all the INSERT INTO sample statements, and change them to INSERT IGNORE INTO sample.
You could also do step 2 using a pipeline:
sed 's/INSERT INTO sample/INSERT IGNORE INTO sample/' sample_table.dump | mysql -u root -p sample_db
If the file is too big to edit to add the ALTER TABLE statement, I suggest you create the dump with the --no-create-info option to mysqldump, and create the table by hand (with the unique key) before loading the dump file.
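That last suggestion would look roughly like this (user, host, and database names are placeholders; note that mysqldump backtick-quotes table names, so the sed pattern should match that):
mysqldump --no-create-info -u root -p source_db sample > sample_table.dump
# create the table by hand in the target database, with the unique key on email, then:
sed 's/INSERT INTO `sample`/INSERT IGNORE INTO `sample`/' sample_table.dump | mysql -u root -p sample_db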

LOAD DATA LOCAL INFILE help required

Here's my query for loading a MySQL table using a CSV file.
LOAD DATA LOCAL INFILE 'table.csv' REPLACE INTO TABLE table1 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\'' LINES TERMINATED BY 'XXX' IGNORE 1 LINES
SET date_modified = CURRENT_TIMESTAMP;
Suppose my CSV contains 500 records with 15 columns. I changed three rows and terminated them with 'XXX'. I now want to update the MySQL table with this file. My primary key is an auto-incremented value. When I run this query, all 500 rows get updated with the old data, and the rows I changed get added as new ones. I don't want the new ones. I want my table to be replaced with the CSV as-is. I tried changing my primary key to non-AI; it still didn't work. Any pointers, please? Thanks.
I am making some assumptions here.
1) You don't have the autonumber value in your file.
Since your primary key is not in your file, MySQL will not be able to match rows. An autonumber primary key is an artificial key, so it is not part of the data; MySQL adds this artificial primary key when the row is inserted.
Let's assume your file contained some unique identifier; let's call it Identification_Number. If this number is both in the file and your table uses it as a primary key, then MySQL will be able to identify the rows from the file and match them to the rows in the table.
While a lot of people will only use autonumbers in a database, I always check whether there is a natural key in the data. If I identify one, I do some performance testing with this natural key in a table, and based on the performance metrics of both I decide on a key.
Hopefully I did not get your question wrong, but I suspect this might be the case.
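In other words, REPLACE can only match rows if the key is present in the file. If the CSV carried a natural key such as the Identification_Number above, and table1 had a primary or unique key on that column, the load would look roughly like the original query plus an explicit column list (column names here are placeholders):
LOAD DATA LOCAL INFILE 'table.csv' REPLACE INTO TABLE table1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\''
LINES TERMINATED BY 'XXX'
IGNORE 1 LINES
(Identification_Number, col2, col3)
SET date_modified = CURRENT_TIMESTAMP;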

How do I remove redundant Primary Keys from a MySQL Table?

I have been developing an app for some time. This involves entering and deleting a lot of useless data in the tables. Now that I want to go to production, I want to get rid of all the data but also reset all the IDs (primary keys) so that the live system can start fresh with sensible IDs like 1, 2, 3, etc.
Using MySQL and PHP / Codeigniter
Many, many thanks for your help!
I would normally use TRUNCATE - this both removes the data and resets the AUTO_INCREMENT.
Note that MySQL will perform a row by row deletion if there is a foreign key relationship, which is quite convenient (compared to SQL Server).
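For example:
TRUNCATE TABLE my_table;  -- removes all rows and resets the AUTO_INCREMENT counter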
If your pk is auto-increment, you can do
ALTER TABLE tbl AUTO_INCREMENT = 1
Make sure the table is empty before executing the query.