How to execute SQL queries while skipping errors? - mysql

I am trying to import data into a table from SQL files using the command line.
The data contains duplicates in the field url.
But the field url in the table is unique, so when I try to insert the data I get the error "Duplicate entry".
How can I import all the data while skipping this error?

You can use the --force (-f) flag.
mysql -u userName -p -f -D dbName < script.sql
From man mysql:
· --force, -f
Continue even if an SQL error occurs.

Create a staging table with the same structure as your destination
table but without the constraints (unique index included).
Manually check the duplicates and decide how you want to handle them: choose between the duplicate rows, or merge them.
Write the appropriate query and use "insert into ... select ...".
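A minimal sketch of that approach, assuming the destination table is called mytable, has an auto-increment id primary key, and the unique index on url is simply named url (all of these names are placeholders):
CREATE TABLE mytable_staging LIKE mytable;
-- drop the copied unique index so rows with duplicate urls can be loaded
ALTER TABLE mytable_staging DROP INDEX url;
-- ... load the raw data into mytable_staging here ...
-- then keep one row per url when copying into the real table
INSERT INTO mytable
SELECT s.*
FROM mytable_staging s
JOIN (SELECT MIN(id) AS id FROM mytable_staging GROUP BY url) k ON k.id = s.id;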

How can I import all the data while skipping this error?
Drop the index for the time being -> run your batch insert -> recreate the index afterwards.
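Note that the unique index can only be recreated once the duplicates have been removed. A rough sketch, assuming a table mytable with an id primary key and a unique index named url (placeholder names):
ALTER TABLE mytable DROP INDEX url;
-- ... run the batch insert / import here ...
-- remove the duplicates that were loaded, keeping the lowest id per url
DELETE t1 FROM mytable t1
JOIN mytable t2 ON t1.url = t2.url AND t1.id > t2.id;
ALTER TABLE mytable ADD UNIQUE INDEX url (url);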

If you are using INSERT, then you can ignore errors using INSERT IGNORE or ON DUPLICATE KEY UPDATE (preferable because it only ignores duplicate-key errors).
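For illustration, with a hypothetical table mytable whose url column is unique, the two forms look like:
-- rows that would violate the unique key are silently skipped
INSERT IGNORE INTO mytable (url, title) VALUES ('http://example.com/', 'Example');
-- the existing row is updated instead of raising an error
INSERT INTO mytable (url, title) VALUES ('http://example.com/', 'Example')
ON DUPLICATE KEY UPDATE title = VALUES(title);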
If you are using LOAD DATA INFILE, then you can use the IGNORE keyword. As described in the documentation:
If you specify IGNORE, rows that duplicate an existing row on a
unique key value are discarded. For more information, see Comparison
of the IGNORE Keyword and Strict SQL Mode.
Or, do as I would normally do:
Load the data into a staging table.
Validate the staging table and only load the appropriate data into the final table.

Fast delete duplicate records in MySQL

I'm trying to import a very big SQL dump (around 37 million rows) into an InnoDB table. There are tons of duplicates, and what I want to achieve is to prevent duplicate rows from being inserted without changing the actual dump. The field email might have duplicates. I tried the following: after importing the whole dump into the db, I tried to execute this SQL:
set session old_alter_table=1;
ALTER IGNORE TABLE sample ADD UNIQUE (email);
But the second query ran for around an hour before I just canceled it.
What is the proper way to get rid of duplicates?
I have a couple of ideas:
Maybe create a table with a unique index before starting the import, so duplicates are prevented during insertion without harming the whole process?
Maybe, after importing the dump, select distinct email values and insert them into another table?
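A sketch of the second idea (sample_dedup and uniq_email are placeholder names):
CREATE TABLE sample_dedup LIKE sample;
ALTER TABLE sample_dedup ADD UNIQUE KEY uniq_email (email);
-- IGNORE keeps the first row seen for each email and silently drops the rest
INSERT IGNORE INTO sample_dedup SELECT * FROM sample;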
From a .dump file
When importing, use -f for "force":
mysql -f -p < 2015-10-01.sql
This causes the import to continue after an error is encountered, which is useful in this case if you create the unique key constraint before importing.
From a .csv file
If you are using "LOAD DATA", use "IGNORE", e.g.:
LOAD DATA LOCAL INFILE 'somefile.csv' IGNORE
INTO TABLE some_db.some_tbl
FIELDS TERMINATED BY ';'
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(`somefield1`,`somefield2`);
According to the documentation:
If you specify IGNORE, rows that duplicate an existing row on a unique
key value are discarded.
This requires you to create the unique key constraint before importing, which will be fast on an empty table.
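Creating that constraint up front could look like this, assuming the duplicate-prone column is email (uniq_email is a placeholder index name):
ALTER TABLE some_db.some_tbl ADD UNIQUE KEY uniq_email (email);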
Edit the dump file as follows:
Modify the CREATE TABLE statement to add a unique key on the email field, or add an ALTER TABLE statement after it.
Find all the INSERT INTO sample statements, and change them to INSERT IGNORE INTO sample.
You could also do step 2 using a pipeline:
sed 's/INSERT INTO sample/INSERT IGNORE INTO sample/' sample_table.dump | mysql -u root -p sample_db
If the file is too big to edit to add the ALTER TABLE statement, I suggest you create the dump with the --no-create-info option to mysqldump, and create the table by hand (with the unique key) before loading the dump file.
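A sketch of that workflow, with placeholder database and credential names:
# dump only the data, without the CREATE TABLE statements
mysqldump --no-create-info -u root -p source_db sample > sample_data.sql
# create the table by hand (with the unique key on email), then load the dump,
# rewriting the inserts as above so duplicate rows are skipped
sed 's/INSERT INTO sample/INSERT IGNORE INTO sample/' sample_data.sql | mysql -u root -p sample_db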

How can I get this SQL dump to import into PhpMyAdmin without this error?

I am trying to import a SQL dump from a live WordPress site into my local MAMP dev environment using PhpMyAdmin so I can make edits to the site locally. I keep getting this error:
Error
SQL query: INSERT INTO `wp_options` VALUES (259568, '_transient_timeout_geoip_98.80.4.79', '1440122500', 'no');
MySQL said:
#1062 - Duplicate entry '259568' for key 'PRIMARY'
My knowledge of SQL is minimal. What could be causing this and what do I need to do in order to fix the problem so that I can successfully import the database and get the site up and running locally?
You can replace the INSERT statements with INSERT IGNORE. That helps import entries even if they have duplicates.
If you use a unix-like OS, you can use the sed command to rewrite the inserts:
cat dump.sql | sed s/"^INSERT"/"INSERT IGNORE"/g > dump-new.sql
Or you can add the --insert-ignore option to mysqldump so that it writes INSERT IGNORE statements rather than INSERT statements.
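For example (user and database names are placeholders):
mysqldump --insert-ignore -u username -p live_wordpress_db > dump.sql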
For export via phpMyAdmin it's possible to set an option:
Settings -> Export -> SQL -> Use ignore inserts
Your table already has a record with a primary key value of 259568, and primary keys are required to be unique. Deleting the existing record would allow you to insert this one, but doing so may cause problems as well.

INSERT...ON DUPLICATE KEY UPDATE in mysql workbench

Original Question
MySQL Workbench allows one to define "inserts": rows to be inserted into the database on creation. It does this by adding lines such as
START TRANSACTION;
USE `someDB`;
INSERT INTO `someDB`.`countries` (`name`) VALUES ('South Africa');
COMMIT;
However, if the database, table and entry already exist, this throws an error. Creation of tables does not, as Workbench uses CREATE TABLE IF NOT EXISTS for those. Is there a way to get Workbench to insert using INSERT...ON DUPLICATE KEY UPDATE?
Half Solution
Running the script with the force argument:
mysql --user=xx --password=xx --force < script.sql
This ignores such errors, and is thus a solution in my particular case. However, the actual question of modifying the type of INSERTs still stands (for interest).
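For reference, a hand-edited version of the generated insert could look like this (a sketch only; it assumes name carries the unique or primary key on countries, and Workbench does not generate it for you):
INSERT INTO `someDB`.`countries` (`name`) VALUES ('South Africa')
ON DUPLICATE KEY UPDATE `name` = VALUES(`name`);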

Ignore mysql error messages when executing an sql file

I am copying records from one table to another, and there is a chance that some records may already be in the second table I am copying the records to.
Since there are lots of rows I am copying, I am looking for a way to ignore all the "record already exists" messages from MySQL and continue executing the MySQL file.
Is there a way I can suppress the error messages?
As documented under INSERT Syntax:
The INSERT statement supports the following modifiers:
[ deletia ]
If you use the IGNORE keyword, errors that occur while executing the INSERT statement are treated as warnings instead. For example, without IGNORE, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table causes a duplicate-key error and the statement is aborted. With IGNORE, the row still is not inserted, but no error is issued.
Try this:
zcat db.sql.gz | sed -e 's/INSERT/INSERT IGNORE/' | mysql -u user -p dbname
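Applied directly to the table-to-table copy described in the question, the same modifier works without going through a file at all (a sketch; source_tbl and target_tbl are placeholder names for tables with the same structure):
-- rows whose unique/primary key already exists in target_tbl are skipped
INSERT IGNORE INTO target_tbl SELECT * FROM source_tbl;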

Import and overwrite existing data in MySQL

I have data in a MySQL table with a unique key. I want to import more recent data that is stored in a CSV at the moment. I would like it to overwrite the old data if the key already exists, or create a new row if the key does not exist. Does anyone know how to do this in MySQL?
Thank you for your help!
Jeff
Use INSERT ... ON DUPLICATE KEY UPDATE.
INSERT INTO table (column) VALUES ('value') ON DUPLICATE KEY UPDATE column='value'
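As a slightly fuller sketch for the CSV case, assuming a hypothetical table prices with a unique key on sku, each imported row could be written as:
INSERT INTO prices (sku, price, qty)
VALUES ('A100', 9.99, 5)
ON DUPLICATE KEY UPDATE price = VALUES(price), qty = VALUES(qty);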
I was looking for the answer to the originator's exact question here. I can see by his last comment/question that he was looking for something more, and it stimulated the following solution.
Nesting another shell environment (e.g. MySQL) inside a script or batch file brings a lot of headaches switching syntaxes. I tend to look for solutions that operate within one shell to cut down on those complications. I have found this command string:
mysqlimport --fields-terminated-by=, --ignore-lines=1 --local -uMYSQL_ACCT -pACCT_PWD YOUR_DB_NAME /PATH_TO/YOUR_TABLE_NAME.csv
I got this idea from Jausion's comment at MySQL 5.0 RefMan :: 4.5.5 mysqlimport w/Jausions Comment. In a nutshell, you may import into a database table from a CSV file by simply naming the CSV file after the table and appending the .csv extension. You may append to the table and even overwrite rows.
Here is real-life CSV file content from one of my operations. I like to make human-readable CSV files that include the column headers in the first line, hence the --ignore-lines=1 option.
id,TdlsImgVnum,SnapDate,TdlsImgDesc,ImageAvbl
,12.0.3.171-090915-1,09/09/2015,Enhanced CHI,Y
NOTICE that the comma is the first character, making the first field value NULL.
Here is the Linux bash command that created the second line:
echo null,"$LISTITEM","$IMG_DATE","$COMMENTS","$AVBL" | tee -a YOUR_TABLE_NAME.csv
What is important to know here is that a null value for the primary key id field allows MySQL auto-increment to be applied, which simply adds a new row to your table. Sorry, can't recall if I read this somewhere or learned it the hard way :)
So, voilà! Conversely, and of MORE importance to this question: you may OVERWRITE a whole row of data by supplying the primary key of the row in question.
I am just in the throes of designing a new table to fulfill exactly these requirements with the overwrite operation but, as I alluded to, I already use the NULL append-a-row auto-increment option.
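If incoming rows should overwrite existing rows with the same primary or unique key instead of raising an error, mysqlimport also accepts a --replace option; the command above would then become (all names remain placeholders):
mysqlimport --replace --fields-terminated-by=, --ignore-lines=1 --local -uMYSQL_ACCT -pACCT_PWD YOUR_DB_NAME /PATH_TO/YOUR_TABLE_NAME.csv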