When using LOAD DATA INFILE, is there a way to get the same functionality provided by ON DUPLICATE KEY UPDATE of regular INSERT statements?
What I want to do is: for each line of my file, if the row doesn't exist, a new row is inserted, otherwise the selected fields are updated.
My table has 5 columns: A, B, C, D and E. A is the primary key. Sometimes I have to insert new rows with all the values, but sometimes I have to update only B and C, for example. The point is that I want to group all the INSERTs and UPDATEs in the same file.
Thanks
If you want to insert/update only some of the fields, you should load the data into an additional (staging) table first, and then use an INSERT, UPDATE, or INSERT ... SELECT with ON DUPLICATE KEY UPDATE statement to copy/modify the data; otherwise the other fields will be set to NULL.
The REPLACE option of LOAD DATA INFILE won't help you in this case.
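For example, a minimal sketch of that approach for the table above (the staging table, the target table name mytable, and the file name are assumptions):

-- load the file into a staging table with the same structure
CREATE TABLE staging LIKE mytable;
LOAD DATA INFILE 'rows.csv'
INTO TABLE staging
FIELDS TERMINATED BY ',';

-- insert new rows; for existing keys, update only B and C
INSERT INTO mytable (A, B, C, D, E)
SELECT A, B, C, D, E FROM staging
ON DUPLICATE KEY UPDATE
    B = VALUES(B),
    C = VALUES(C);

DROP TABLE staging;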
Also, you can use Data Import tool (CSV format) in dbForge Studio for MySQL (free express edition), just choose Append/Update import mode and specify fields mapping in the Data Import wizard.
I have a CSV file that I am loading into my database. I want the previous data in the table to be overwritten and not appended every time I load my CSV file. Is it possible to do this within a single query?
Is the only solution to TRUNCATE the table and then run LOAD DATA INFILE?
Assuming you have a primary key, you can use REPLACE. As the documentation states:
The REPLACE and IGNORE modifiers control handling of input rows that duplicate existing rows on unique key values:
If you specify REPLACE, input rows replace existing rows. In other words, rows that have the same value for a primary key or unique index as an existing row. See Section 13.2.9, “REPLACE Statement”.
However, if you want to replace the existing table, then truncate the table first and then load.
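Both options might look like this (file and table names are assumptions):

-- option 1: rows that collide on the primary key replace the old ones
LOAD DATA INFILE 'data.csv'
REPLACE
INTO TABLE mytable
FIELDS TERMINATED BY ',';

-- option 2: wipe the table first, then load everything fresh
TRUNCATE TABLE mytable;
LOAD DATA INFILE 'data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',';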
Suppose I have a MySQL table with three fields: key, value1, value2
I want to load data for two fields (key,value1) from file inserts.txt.
Content of inserts.txt:
1;2
3;4
with:
LOAD DATA LOCAL INFILE 'inserts.txt'
REPLACE
INTO TABLE `test_insert_timestamp`
FIELDS TERMINATED BY ';'
(`key`, value1);
But in case of REPLACE, I want to leave the value2 unchanged.
How could I achieve this?
REPLACE does not update rows in place; it deletes and re-inserts them. From the documentation:
MySQL uses the following algorithm for REPLACE (and LOAD DATA ... REPLACE):
1. Try to insert the new row into the table
2. While the insertion fails because a duplicate-key error occurs for a primary key or unique index:
   a. Delete from the table the conflicting row that has the duplicate key value
   b. Try again to insert the new row into the table
(https://dev.mysql.com/doc/refman/5.7/en/replace.html)
So you can't keep a value from a row that is going to be deleted.
What you want is to emulate the ON DUPLICATE KEY UPDATE logic.
You can't do that within a single LOAD DATA statement. What you have to do is load your data into a temporary table first, then run an INSERT from the temporary table into your destination table, where you can use the ON DUPLICATE KEY UPDATE clause.
The whole process is fully detailed in the most upvoted answer of this question : MySQL LOAD DATA INFILE with ON DUPLICATE KEY UPDATE
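A minimal sketch of that process for this table (the temporary table name is an assumption, and `key` is assumed to be the primary key):

CREATE TEMPORARY TABLE tmp_insert LIKE test_insert_timestamp;

LOAD DATA LOCAL INFILE 'inserts.txt'
INTO TABLE tmp_insert
FIELDS TERMINATED BY ';'
(`key`, value1);

-- update value1 on existing keys, insert missing keys,
-- and leave value2 untouched either way
INSERT INTO test_insert_timestamp (`key`, value1)
SELECT `key`, value1 FROM tmp_insert
ON DUPLICATE KEY UPDATE value1 = VALUES(value1);

DROP TEMPORARY TABLE tmp_insert;

For brand-new keys, value2 simply takes its column default.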
Ok, so I have a database in my testing environment called 'Food'. In this database, there is a table called 'recipe', with a column called 'source'.
This same database exists in my local environment. However, I just received an updated database (in my local environment) where all the column values (for 'source') have changed.
Is there any way I can migrate the 'source' column from my local to my test environment without changing the values of any other column? There are 1186 rows in the 'recipe' table of the 'Food' database in my test environment that need to be updated, and ONLY in the 'source' column.
You need some way to uniquely identify your Recipes. If both tables have a surrogate key that remained constant, use that. Otherwise figure out some way to match up the new data with your test data: you might already have a unique index in mind or you might need to decide on a combination of fields that uniquely identify your Recipes.
On a side note, why can't you just overwrite all the columns? It is just test data, right?
If only a column has changed and you have IDs (or keys) on your rows, you could follow these steps:
create an intermediate table locally
insert keys and new source values there (either those which have changed or all)
use mysqldump to selectively export the table from the local database
copy the dumped table to the remote database server
import it there
join it with the production table in an UPDATE statement to replace the values (see the sketch after this list)
drop the intermediate table on the server
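The export/import/update steps might look like this (the intermediate table name and its key column are assumptions):

-- locally: stage the new values
CREATE TABLE new_source (recipe_id INT PRIMARY KEY, source VARCHAR(255));
-- ... INSERT the changed (recipe_id, source) pairs here ...

-- shell, locally: dump only the intermediate table
--   mysqldump -u user -p Food new_source > new_source.sql
-- shell, on the test server: import it
--   mysql -u user -p Food < new_source.sql

-- on the test server: copy the values across, then clean up
UPDATE recipe r
JOIN new_source ns ON ns.recipe_id = r.id
SET r.source = ns.source;
DROP TABLE new_source;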
I am attempting to import data into a table that has a field as follows:
result_id
This field is set to AUTO_INCREMENT, PRIMARY and UNIQUE.
The data I am importing has values in the result_id field that are the same (in places) as the current data in the table. MySQL won't let me import it because of the duplicates (which is fair enough).
Is there a way to get MySQL to append the data I am importing and not use the duplicate result_id values, basically letting the column continue its own numbering? I am asking because I am importing about 25,000 records and I don't want to manually remove or alter the result_id information in the data being imported.
Thanks,
H.
How are you importing your data to MySQL?
If you are importing with an SQL script, it will contain statements like INSERT INTO .... Open the file in a text editor and replace every INSERT with INSERT IGNORE. This will skip the rows whose primary keys duplicate existing ones.
Alternatively, if you want the imported rows to overwrite the older rows that have the same primary keys, simply use REPLACE queries in place of the INSERT queries.
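For example (table and column names are made up):

-- original line from the import script
INSERT INTO results (result_id, score) VALUES (17, 42);

-- keeps the existing row 17 and silently skips this one
INSERT IGNORE INTO results (result_id, score) VALUES (17, 42);

-- or: deletes the existing row 17 and inserts this one
REPLACE INTO results (result_id, score) VALUES (17, 42);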
Hope it helps...
[EDIT]
Since your primary key is AUTO_INCREMENT: in the table you want to import the data into, add a dummy column, say "dummy", and allow it to be NULL. Your import script will contain statements like INSERT INTO table (columns) VALUES (...). In the list of column names, replace "result_id" with "dummy" and execute the script; the imported IDs land in the throwaway column while result_id gets a fresh AUTO_INCREMENT value.
After executing the script, simply remove the "dummy" column from the table. It is a bit dirty and time-consuming, but it will do the job.
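A sketch of that workaround (table and column names are made up):

ALTER TABLE results ADD COLUMN dummy INT NULL;

-- import script line, with result_id swapped for dummy;
-- AUTO_INCREMENT assigns a fresh result_id instead
INSERT INTO results (dummy, score) VALUES (17, 42);

ALTER TABLE results DROP COLUMN dummy;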
I have a members table. Half the data/fields are populated through an online CMS.
But for the member's core contact detail fields, they come from a CSV exported from a desktop database.
I wanted to be able to upload this CSV and use the LOAD DATA command to update the members' contact detail fields (matching on id) but without touching/erasing the other fields.
Is there a way to do this or must I instead loop through each row of the CSV and UPDATE... (if that's the case, any tips for the best way to do it?)
The LOAD DATA INFILE command supports the REPLACE keyword. This might be what you're looking for. From the manual:
REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted
The LOAD DATA INFILE command also accepts a column list, so perhaps you can upload the data specifying only the columns you want to update.
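Such a load might look like this (file, table, and column names are assumptions). Note, though, that because REPLACE deletes the old row before re-inserting it, any columns left out of the list are reset to their defaults rather than preserved:

LOAD DATA LOCAL INFILE 'contacts.csv'
REPLACE
INTO TABLE members
FIELDS TERMINATED BY ','
(id, phone, address);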