I am using bash to load a file into MySQL, and I have:
mysql --local-infile=1 -u user data_base_1 < file.sql
and file.sql is:
$ cat file.sql
load data local infile '/folder/load.csv' into table table_1 fields terminated by '|'
The code works fine.
The problem is that if the PK of a row in the file already exists in the table, that row is not inserted; instead, I need the existing row to be replaced. How can I do it?
Who can help me?
Thanks
You can specify REPLACE with LOAD DATA:
LOAD DATA LOCAL INFILE '/folder/load.csv' REPLACE INTO TABLE table_1 FIELDS TERMINATED BY '|'
Or else use the mysqlimport --replace option.
http://dev.mysql.com/doc/refman/5.6/en/mysqlimport.html#option_mysqlimport_replace
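A minimal sketch of the mysqlimport route (note: mysqlimport derives the target table name from the data file name, so the CSV would need to be named table_1 with some extension):
mysqlimport --local --replace --fields-terminated-by='|' \
    -u user data_base_1 /folder/table_1.csv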
You could load into a temporary table and then execute two SQL statements:
UPDATE table t
JOIN temp_table s ON ... (match condition)
SET t.col = s.col, ...;

INSERT INTO table (...)
SELECT ...
FROM temp_table s
WHERE NOT EXISTS (SELECT 1 FROM table t WHERE ... (same match condition));
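For example, a minimal sketch of that approach, assuming a hypothetical table_1(id, col_a, col_b) keyed on id (the column names are placeholders, not from the question):
CREATE TEMPORARY TABLE temp_table LIKE table_1;

LOAD DATA LOCAL INFILE '/folder/load.csv'
INTO TABLE temp_table
FIELDS TERMINATED BY '|';

-- rows whose PK already exists: overwrite them
UPDATE table_1 t
JOIN temp_table s ON s.id = t.id
SET t.col_a = s.col_a, t.col_b = s.col_b;

-- rows whose PK is new: insert them
INSERT INTO table_1 (id, col_a, col_b)
SELECT s.id, s.col_a, s.col_b
FROM temp_table s
WHERE NOT EXISTS (SELECT 1 FROM table_1 t WHERE t.id = s.id);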
I am trying to insert some data using LOAD DATA INFILE into a MySQL table that already has some data. The table contains id and name. My CSV file contains three fields: id, name and code. The table schema also has these three fields, but currently the table has NULL for the code field. I am trying to insert the code from the CSV into an existing row in the table if it matches the name; otherwise I am trying to insert a completely new row.
The code I have tried is as follows:
LOAD DATA INFILE 'table1.csv'
INTO TABLE table1
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(#code, #name, #other_columns)
IF EXISTS (SELECT * FROM table1 where name=#name);
BEGIN
set code=#Code;
END
ELSE
BEGIN
set code=#Code, name=#name;
END
By doing so, I am getting a MySQL syntax error, but I am unable to figure it out. Can anyone point me in the right direction, or suggest another approach? I have thousands of new rows to insert and thousands of existing rows to modify based on a certain field (name in this case).
MySQL does not allow the LOAD DATA INFILE statement inside a stored program, which is where the IF statement appears. Break up your task into two parts. First, LOAD DATA INFILE into a temporary table. Then create a stored program that replaces the loaded data into your table1 from the temporary table.
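For example, a sketch of that two-step approach, assuming the CSV columns are (id, name, code) and that name is the match key (adjust the column list to your actual file; the staging table name is just a placeholder):
CREATE TEMPORARY TABLE table1_stage LIKE table1;

LOAD DATA INFILE 'table1.csv'
INTO TABLE table1_stage
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(id, name, code);

-- names that already exist: fill in the missing code
UPDATE table1 t
JOIN table1_stage s ON s.name = t.name
SET t.code = s.code;

-- new names: insert complete rows
INSERT INTO table1 (id, name, code)
SELECT s.id, s.name, s.code
FROM table1_stage s
WHERE NOT EXISTS (SELECT 1 FROM table1 WHERE table1.name = s.name);
These two statements can also be wrapped in a stored procedure if you want a single call.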
I have a big table (more than 60k rows), and I am trying to copy the unique rows from it to another table. The query is as follows:
INSERT INTO tbl2(field1, field2)
SELECT DISTINCT field1, field2
FROM tbl1;
But it is taking ages to run this query. Can someone suggest any way to speed this process up?
Execute a mysqldump of your table, generating a sql file, then filter duplicated data with a shell command:
cat dump.sql | uniq > dump_filtered.sql
Check the generated file, then create your new table and reload the filtered dump (LOAD DATA INFILE only applies if the dump is a delimited text file rather than SQL; see the sketch below).
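One concrete way to do that (a sketch with placeholder names, not taken from the answer above): dump the data rather than the SQL with mysqldump --tab, which writes a tab-delimited .txt file that sort -u can de-duplicate and LOAD DATA INFILE can read back:
mysqldump --tab=/tmp -u user my_db tbl1
# /tmp now contains tbl1.sql (schema) and tbl1.txt (tab-delimited data)
sort -u /tmp/tbl1.txt > /tmp/tbl1_unique.txt
# tbl2 must already exist with the same columns as tbl1
mysql -u user my_db -e "LOAD DATA INFILE '/tmp/tbl1_unique.txt' INTO TABLE tbl2;"
Depending on the server's secure_file_priv setting, you may need a different directory than /tmp.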
Try this:
1. Drop the destination table: DROP TABLE DESTINATION_TABLE;
2. CREATE TABLE DESTINATION_TABLE AS (SELECT DISTINCT field1, field2 FROM SOURCE_TABLE);
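Another variant (not from the answer above, just a common alternative): let a unique index do the de-duplication, which also leaves the destination table indexed. This assumes the destination table starts empty:
ALTER TABLE tbl2 ADD UNIQUE KEY uq_field1_field2 (field1, field2);

INSERT IGNORE INTO tbl2 (field1, field2)
SELECT field1, field2
FROM tbl1;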
I have a column in a MySQL table that consists of comma-delimited strings. I would like to extract the set of distinct strings that occur in this column across all rows of the table. What is the easiest way to accomplish this?
The solution doesn't need to be pure MySQL. It could involve unix, perl, etc.
You could probably get a quick-and-dirty list of distinct strings from a comma-delimited column using SELECT INTO OUTFILE, sed, and LOAD DATA INFILE.
Basically you want to dump the text data to a flat file, using a comma as the line delimiter so each string will be treated as a separate row when you load it back into the database. Then load the extracted data into a new table and select the distinct values.
Off the top of my head, the code would look something like this:
select str
into outfile
'/tmp/your_table_data.txt'
lines terminated by ','
from your_table;
sed -e 's/\\,/,/g' -e 's/,$//' /tmp/your_table_data.txt > /tmp/commadelimited.txt
create table your_table_parsed(str text);
load data infile '/tmp/commadelimited.txt'
ignore into table your_table_parsed
fields terminated by ','
lines terminated by ',';
select distinct str from your_table_parsed;
The way I chose was to run the SELECT from outside the mysql shell and pipe the results into tr and sort -u:
mysql my_db [-p -u -h] -se "select my_column from my_table;" | tr ',' '\n' | sort -u
This is pretty simple and seems to give the correct results as far as I can tell.
I need to update existing rows in a table with LOAD DATA INFILE based on some condition. Is this possible?
load data infile 'E:/xxx.csv'
into table tld_tod
#aaa, #xxx_date, #ccc
fields terminated by ','
LINES TERMINATED BY '\r\n'
set xxx = str_to_date(#xxx_date, '%d-%b-%y')
where xxx is not null and aaa=#aaa
You can also create a staging table, insert the data from the CSV file into the staging table, and then finally insert the data into your target table with the required operations and filtering.
CREATE TEMPORARY TABLE staging LIKE tld_tod;
LOAD DATA INFILE 'E:/xxx.csv'
INTO TABLE staging
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n';
INSERT INTO tld_tod (xxx)
SELECT STR_TO_DATE(col_date, '%d-%b-%y')
FROM staging
WHERE col_date IS NOT NULL;
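Since the original goal was to update rows that already exist, the last step could also be an UPDATE joined against the staging table instead of an INSERT. A sketch, assuming aaa identifies the row to update and the staging table holds the raw date string in a column named xxx_date:
UPDATE tld_tod t
JOIN staging s ON s.aaa = t.aaa
SET t.xxx = STR_TO_DATE(s.xxx_date, '%d-%b-%y')
WHERE s.xxx_date IS NOT NULL;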
In MySQL it's possible to create BEFORE UPDATE triggers, so in this case I suggest using:
delimiter //
CREATE TRIGGER upd_check BEFORE UPDATE ON table
FOR EACH ROW
BEGIN
IF NEW.xxx IS NOT NULL THEN
SET NEW.xxx = 0;
END IF;
END;//
delimiter ;
After creating the trigger, you can run LOAD DATA INFILE without a WHERE clause.
I'm not sure what your specific condition is, but handle it inside the BEGIN ... END block.
Say I have a view in my database, and I want to send a file to someone to create that view's output as a table in their database.
mysqldump of course only exports the 'create view...' statement (well, okay, it includes the create table, but no data).
What I have done is simply duplicate the view as a real table and dump that. But for a big table it's slow and wasteful:
create table tmptable select * from myview
Short of creating a script that mimics the behaviour of mysqldump and does this, is there a better way?
One option would be to do a query into a CSV file and import that. To select into a CSV file:
From http://www.tech-recipes.com/rx/1475/save-mysql-query-results-into-a-text-or-csv-file/
SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
OK, so based on your CSV failure comment, start with Paul's answer. Make the following change to it:
- FIELDS TERMINATED BY ','
+ FIELDS TERMINATED BY ',' ESCAPED BY '\\'
When you're done with that, on the import side you'll do a LOAD DATA INFILE and use the same TERMINATED / ENCLOSED / ESCAPED clauses.
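A sketch of that import side, reusing the same delimiters as the export (orders_copy is a placeholder for whatever table exists on the receiving end):
LOAD DATA INFILE '/tmp/orders.csv'
INTO TABLE orders_copy
FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n';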
Same problem here. My problem is that I want to export a view definition (84 fields and millions of records) as a "create table" statement, because the view can change over time and I want an automatic process. So this is what I did:
Create a table from the view, but with no records:
mysql -uxxxx -pxxxxxx my_db -e "create table if not exists my_view_def as select * from my_view limit 0;"
Export the new table definition. I'm adding a sed command to change the table name my_view_def to match the original view name ("my_view"):
mysqldump -uxxxx -pxxxxxx my_db my_view_def | sed s/my_view_def/my_view/g > /tmp/my_view.sql
Drop the temporary table:
mysql -uxxxx -pxxxxxx my_db -e "drop table my_view_def;"
Export the data as a CSV file:
SELECT * from my_view into outfile "/tmp/my_view.csv" fields terminated BY ";" ENCLOSED BY '"' LINES TERMINATED BY '\n';
Then you'll have two files, one with the definition and another with the data in CSV format.
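On the receiving side, the two files can then be loaded back with something like this (other_db is a placeholder for the destination database):
mysql -uxxxx -pxxxxxx other_db < /tmp/my_view.sql
# then, from the mysql client connected to other_db:
LOAD DATA INFILE '/tmp/my_view.csv' INTO TABLE my_view
FIELDS TERMINATED BY ';' ENCLOSED BY '"' LINES TERMINATED BY '\n';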