Fast LOAD DATA INFILE split into two tables connected by id - MySQL

I have a MySQL database using InnoDB and foreign keys...
I need to import 100 MiB of data from a huge CSV file and split it across two tables, with the records laid out as follows:
Table1
id|data|data2
Table2
id|table1_id|data3
Where Table2.table1_id is a foreign key referencing Table1.id.
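In DDL terms the target schema is roughly this (a sketch; the column types are assumed):
CREATE TABLE Table1 (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  data TEXT,
  data2 TEXT
) ENGINE = InnoDB;
CREATE TABLE Table2 (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  table1_id INT UNSIGNED NOT NULL,
  data3 TEXT,
  FOREIGN KEY (table1_id) REFERENCES Table1 (id)
) ENGINE = InnoDB;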
The MySQL sequence for one instance would look like this:
Load the file into a temporary table.
Insert from the temporary table into Table1.
Get the last insert ID.
Do the last insert group into Table2 using this reference id...
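In SQL, one pass of that sequence looks roughly like this (a sketch; the values are hypothetical):
INSERT INTO Table1 (data, data2) VALUES ('a', 'b');
SET @t1_id = LAST_INSERT_ID();
INSERT INTO Table2 (table1_id, data3) VALUES (@t1_id, 'c');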
That is utterly slow...
How do I do this using LOAD DATA INFILE...? Any real ideas with a high-speed result?

You could temporarily add a column data3 to Table1 (I also add a done column to distinguish records that originate from the CSV from those that already exist or originate from elsewhere):
ALTER TABLE Table1
  ADD COLUMN data3 TEXT,
  ADD COLUMN done BOOLEAN DEFAULT TRUE;
LOAD DATA
INFILE '/path/to/csv'
INTO TABLE Table1 (data, data2, data3)
SET done = FALSE;
INSERT
INTO Table2 (table1_id, data3)
SELECT id, data3 FROM Table1 WHERE NOT done;
ALTER TABLE Table1
  DROP COLUMN data3,
  DROP COLUMN done;
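Before running the final ALTER that drops the helper columns, you can sanity-check the copy (a sketch; both counts should match):
SELECT COUNT(*) FROM Table1 WHERE NOT done; -- rows loaded from the CSV
SELECT COUNT(*) FROM Table2
WHERE table1_id IN (SELECT id FROM Table1 WHERE NOT done); -- rows copied over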

Finding ID and inserting it into another table

I have a table with two columns, ID and WORD. I've used the following query to insert several files into this table:
LOAD DATA LOCAL INFILE 'c:/xad' IGNORE INTO TABLE words LINES TERMINATED BY '\n' (@col1) SET word = @col1;
Now I'd like to find specific values and insert them into another table. I know based on this question that I can do the following
insert into tab2 (id_customers, value)
values ((select id from tab1 where customers='john'), 'alfa');
But I'd like to do this based on the files. For example:
Loop through each line of file xad and pass its value to a query like the following:
insert into othertable (word_id)
values ((select id from firsttable where word='VALUE FROM CURRENT LINE OF FILE'));
I can write a Java app to do this line by line but I figured it'd be faster to make MySQL do the work if possible. Is there a way to make MySQL loop over each line, find the ID, and insert it into othertable?
Plan A: A TRIGGER could be used to conditionally copy the id to another table when encountered in whatever loading process is used (LOAD DATA / INSERT .. SELECT .. / etc).
Plan B: Simply load the table, then copy over the ids that you desire.
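A minimal sketch of Plan A's trigger, assuming id is the AUTO_INCREMENT column of words; the IN-list is a hypothetical placeholder for whatever condition selects your "specific values":
DELIMITER //
CREATE TRIGGER words_ai AFTER INSERT ON words
FOR EACH ROW
BEGIN
  -- hypothetical condition: copy only the words you care about
  IF NEW.word IN ('alfa', 'beta') THEN
    INSERT INTO othertable (word_id) VALUES (NEW.id);
  END IF;
END//
DELIMITER ;
LOAD DATA fires INSERT triggers, so the copy happens during the load itself.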
Notes:
The syntax for this
insert into tab2 (id_customers, value)
values ((select id from tab1 where customers='john'), 'alfa');
is more like
INSERT INTO tab2 (id_customers, value)
SELECT id, 'alfa'
FROM tab1
WHERE customers = 'john';
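Applying Plan B to the file from the question might look like this (staging is a hypothetical table name):
CREATE TEMPORARY TABLE staging (word VARCHAR(255));
LOAD DATA LOCAL INFILE 'c:/xad' IGNORE INTO TABLE staging
LINES TERMINATED BY '\n' (word);
INSERT INTO othertable (word_id)
SELECT w.id
FROM staging s
JOIN words w ON w.word = s.word;
DROP TEMPORARY TABLE staging;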

Update table from file with MySQL [duplicate]

I have a table in a database, and I'd like to update a column which I have offline on a local file. The file itself has two columns
an ID which corresponds to an ID column in the table, and
the actual value.
I've been able to create new rows using
LOAD DATA INFILE 'file.txt' INTO TABLE table
FIELDS TERMINATED BY ','
But I'm not sure how I can specifically insert values in such a way that the ID column in the file is joined to the ID column in the table. Can someone help with the SQL syntax?
I suggest you load your data into a temporary table, then use an INSERT ... SELECT ... ON DUPLICATE KEY UPDATE; for example:
CREATE TEMPORARY TABLE temptable (
id INT UNSIGNED NOT NULL,
val INT,
PRIMARY KEY (id)
) ENGINE = MEMORY;
LOAD DATA LOCAL INFILE '/path/to/file.txt' INTO TABLE temptable FIELDS TERMINATED BY ',';
INSERT INTO my_table (id, val)
SELECT id, val FROM temptable
ON DUPLICATE KEY UPDATE val = VALUES(val);
DROP TEMPORARY TABLE temptable;
Another way could be ...
Since you already know the table name and have both the ID and the actual value, you can directly write the update statements into a file, like:
update mytable set value_col = value where ID_col = ID;
Second update statement
Third update statement
.......
Save the file as *.sql, e.g. updatescript.sql, and then execute that script directly:
mysql -h <hostname> -u root -p <your_db_name> < "E:/scripts/sql/updatescript.sql"
It depends on the number of rows.
If it is in the hundreds, make a script of UPDATE statements and run it; but if it is a large volume, import the file into a new table, update your table with a join, and then drop the new table.
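A sketch of that join-based route, reusing the two-column file layout from the question (my_table and staging are assumed names):
CREATE TABLE staging (id INT UNSIGNED NOT NULL PRIMARY KEY, val INT);
LOAD DATA LOCAL INFILE '/path/to/file.txt' INTO TABLE staging FIELDS TERMINATED BY ',';
UPDATE my_table t
JOIN staging s ON s.id = t.id
SET t.val = s.val;
DROP TABLE staging;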

Import CSV to Update rows in table

There are approximately 26K products (posts), and each product has meta values. The post_id column is the product id in the db, and _sku (meta_key) is the unique id for each product.
I've received a new CSV file that updates all of the values (meta_value) for _sale_price (meta_key) of each product. The CSV file looks like:
SKU, Sale Price
How do I import this CSV to update only the _sale_price row based on the post_id (product id) & _sku value?
I know how to do this in PHP by looping through the CSV and selecting & executing an update for each single product but this seems inefficient.
Preferably with phpMyAdmin and by using LOAD DATA INFILE.
You can use a temporary table to hold the update data and then run a single UPDATE statement.
CREATE TEMPORARY TABLE temp_update_table (
  meta_key VARCHAR(255),
  meta_value VARCHAR(255)
);
LOAD DATA INFILE 'your_csv_pathname'
INTO TABLE temp_update_table FIELDS TERMINATED BY ',' (meta_key, meta_value);
UPDATE `table`
INNER JOIN temp_update_table ON temp_update_table.meta_key = `table`.meta_key
SET `table`.meta_value = temp_update_table.meta_value;
DROP TEMPORARY TABLE temp_update_table;
If product_id is the unique column of that table, you can do that using CSV:
Have a CSV file of the rows you want to import, with their unique ID. The CSV columns must be in the same order as the table's columns; include all columns and no header row.
Then in phpMyAdmin, go to the table in the database and click Import.
Select CSV in the Format drop-down.
Make sure "Update data when duplicate keys found on import (add ON DUPLICATE KEY UPDATE)" is checked.
You can import the new data into another table (table2). Then update your primary table (table1) using an UPDATE with a sub-select:
UPDATE table1 t1 SET
sale_price = (SELECT meta_value FROM table2 t2 WHERE t2.post_id = t1.product_id)
WHERE
(SELECT COUNT(*) FROM table2 t2 WHERE t1.product_id = t2.post_id) > 0;
This is obviously a simplification and you will most likely need to constrain your query a little further.
Make sure to backup your full database before attempting. I recommend you work on a non-production database until the process works flawlessly.
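For the postmeta layout in the question, the constrained version might look like this (a sketch; wp_postmeta is the usual WordPress table name and sku_prices is a hypothetical staging table loaded from the CSV):
UPDATE wp_postmeta price
JOIN wp_postmeta sku
  ON sku.post_id = price.post_id AND sku.meta_key = '_sku'
JOIN sku_prices s ON s.sku = sku.meta_value
SET price.meta_value = s.sale_price
WHERE price.meta_key = '_sale_price';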
It seems to me that rAndom69's answer does not work on PostgreSQL 12, but the join expressed with a WHERE clause does:
UPDATE tableA
SET fieldToPopulateInTableA = temp_update_table.fieldPopulated
FROM temp_update_table
WHERE tableA.correspondingField = temp_update_table.correspondingField;

MySQL single-column table --> insert into other table

I have two tables: t1 and t2
- t2 has only 1 column, named stuff (60,000 entries).
- t1 has 15 columns, including stuff (empty). t1 has about 650,000 entries.
How can I import the data from t2.stuff in t1.stuff when I have nothing to match it against? (I just want to populate empty fields of t1.stuff with data from t2.stuff and don't care about matching ids or anything.)
The best case (I think) would be that if I run such a query about 11 times, all fields of t1.stuff would be populated, because no empty field in t1.stuff would be left over.
Here is an example what the tables look like:
t1:
|__a___|_b_|_c_|stuff|...|
|___308|foo|bar|_____|baz|
|___312|foo|bar|_____|baz|
...
|655578|foo|bar|_____|baz|
t2:
|___stuff___|
|some_info_1|
|some_info_2|
...
|some_info_n|
Maybe there are multiple steps required...
UPDATE
Here is the SOLUTION I went with, in case someone has a similar problem. All credit goes to user nurdglaw for pointing me in the right direction. So here we go:
Add a new column, populated with auto-incrementing numbers, to the table in question (I set alter table t1 auto_increment = 1 and temporarily disabled auto-increment on my primary key to avoid an error with this code):
ALTER TABLE t1 ADD COLUMN new_column INTEGER UNIQUE AUTO_INCREMENT;
Did the same thing for t2. If you don't already have a second table, you can do something like this:
CREATE TABLE t2 (id INTEGER PRIMARY KEY AUTO_INCREMENT, t2_data_column VARCHAR(255)); <-- adjust the number to your needs
and import your data with:
LOAD DATA LOCAL INFILE 'path_on_your_server/data_file.csv'
INTO TABLE t2
LINES TERMINATED BY '\r\n' <-- adjust to your linebreak needs
(t2_data_column)
Now that you have something to match against, you can join t1 with t2 and add the data from t2 to t1:
UPDATE t1 AS s
JOIN t2 AS t ON t.id=s.new_column
SET s.stuff=t.t2_data_column; <-- stuff was the column in t1 I wanted to import the data to.
Tidy up the mess
DROP TABLE t2;
ALTER TABLE t1 DROP COLUMN new_column;
Re-enable auto-increment on your primary key and set it to the number you need for new rows, if you used one before.
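A sketch of that last step, assuming a is the auto-increment primary key and 655579 is the next free value:
ALTER TABLE t1 MODIFY COLUMN a INTEGER NOT NULL AUTO_INCREMENT;
ALTER TABLE t1 AUTO_INCREMENT = 655579;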
That is it, you're done!
One further note: I decided to adjust my data offline and import all 650,000 entries needed with this method in one go, rather than doing it with only the 60,000 I mentioned in the initial question. But you get the idea: it works for any amount of data, matched against whatever you need.
INSERT statements create new rows in your table.
You need an UPDATE on the already existing rows
An easy way to do that is using an external scripting language; here is a rebol example:
; assuming you use the mysql library from softinnov
; and a_ is the name of the unique key to a row in t1
db: open mysql://user:pass#mysql
insert db {select * from t1}
t1rows: copy db
insert db {select * from t2}
t2rows: copy db
foreach row t1rows [
    insert db [{update t1 set t1.stuff = ? where t1.a_ = ?} t2rows/1/1 row/1]
    either tail? next t2rows [
        t2rows: head t2rows
    ] [
        t2rows: next t2rows
    ]
]
sorry, I still have difficulties with the formatting and the variables in your example
Try this:
INSERT INTO t1 (stuff)
SELECT DISTINCT stuff FROM t2;
I hope it helps.
