mysql update table if record not in temp table

Alright, I have multiple MySQL statements that lead into an issue I'm having updating a particular table. First let me show you my code, then I'll explain what I'm trying to do:
/*STEP 1 - create a temporary table to temporarily store the loaded csv*/
CREATE TEMPORARY TABLE IF NOT EXISTS `temptable1` LIKE `first60dayactivity`;
/*STEP 2. load the csv into the previously created temporary table*/
LOAD DATA LOCAL INFILE '/Users/me/Downloads/some.csv'
IGNORE INTO TABLE `temptable1`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
SET CUSTID = 1030,
CREATED = NOW(),
isactive = 1;
/*STEP 3. update first60dayactivity table changing isactive for records that are not in the temptable*/
UPDATE `first60dayactivity` fa
INNER JOIN `temptable1` temp
ON temp.`mid` = fa.`mid`
AND temp.`primarypartnername` = fa.`primarypartnername`
AND temp.`market` = fa.`market`
AND temp.`agedays` = fa.`agedays`
AND temp.`opendate` = fa.`opendate`
AND temp.`CUSTID` = fa.`CUSTID`
SET fa.isactive = IF( temp.`mid` IS NULL, 0, 1 );
/*STEP 4. insert the temp table records into the real table*/
.....blah blah blah.....
Ok, first create a temporary table so that we have a table to hold the imported .csv data. Next, import the .csv data into the temporary table (all this works perfectly so far).
Here is where I run into an issue. I want to update the isactive column of each record in the first60dayactivity table to 0 if the record is NOT found in temptable1 (after my import). Ultimately, I'm gathering a .csv; the .csv holds the new live data that should be considered "active", and I need to set the old data to inactive. So the update does an INNER JOIN to match on several columns to see whether the record is found in temptable1; if it isn't, set isactive to 0, and if it is found, ensure isactive is 1.
The problem is that every record in first60dayactivity keeps isactive = 1. Nothing is being updated to 0, even though I can prove new records exist in temptable1... Can someone tell me what I'm doing wrong in my query?
Thanks in advance!

temp.mid can never be NULL, because you use this column in your join condition and you use an INNER JOIN.
An INNER JOIN returns only the matching rows, so the IF() never sees a NULL. Using a LEFT JOIN for the update should do what I suppose you want to do, as sketched below.
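A minimal sketch of the same statement rewritten with a LEFT JOIN, reusing the join columns from the question:
UPDATE `first60dayactivity` fa
LEFT JOIN `temptable1` temp
ON temp.`mid` = fa.`mid`
AND temp.`primarypartnername` = fa.`primarypartnername`
AND temp.`market` = fa.`market`
AND temp.`agedays` = fa.`agedays`
AND temp.`opendate` = fa.`opendate`
AND temp.`CUSTID` = fa.`CUSTID`
SET fa.isactive = IF( temp.`mid` IS NULL, 0, 1 );
Rows in first60dayactivity with no match in temptable1 now produce NULL for temp.`mid`, so the IF() sets isactive to 0 for exactly those rows.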

Related

Import CSV to Update rows in table

There are approximately 26K products (posts), and each product has meta values. The post_id column is the product id in the db and the _sku (meta_key) is the unique id for each product.
I've received a new CSV file that updates all of the values (meta_value) for _sale_price (meta_key) of each product. The CSV file looks like:
SKU, Sale Price
How do I import this CSV to update only the _sale_price row based on the post_id (product id) & _sku value?
I know how to do this in PHP by looping through the CSV and selecting & executing an update for each single product but this seems inefficient.
Preferably with phpMyAdmin and by using LOAD DATA INFILE.
You can use a temporary table to hold the update data and then run a single UPDATE statement.
CREATE TEMPORARY TABLE temp_update_table (
meta_key VARCHAR(255), -- column types assumed; adjust to your schema
meta_value VARCHAR(255)
);
LOAD DATA INFILE 'your_csv_pathname'
INTO TABLE temp_update_table
FIELDS TERMINATED BY ';'
(meta_key, meta_value);
UPDATE `table`
INNER JOIN temp_update_table ON temp_update_table.meta_key = `table`.meta_key
SET `table`.meta_value = temp_update_table.meta_value;
DROP TEMPORARY TABLE temp_update_table;
If product_id is the unique column of that table, you can do it with a CSV import:
Have a CSV file of the rows you want to import, with their unique IDs. The CSV columns must be in the same order as the table's columns; include every column and no header row.
Then in phpMyAdmin, go to the table in your database and click Import.
Select CSV in the drop-down of Format field
Make sure "Update data when duplicate keys found on import (add ON DUPLICATE KEY UPDATE)" is checked.
You can import the new data into another table (table2). Then update your primary table (table1) using an update with a sub-select:
UPDATE table1 t1
SET sale_price = (SELECT meta_value FROM table2 t2 WHERE t2.post_id = t1.product_id)
WHERE (SELECT COUNT(*) FROM table2 t2 WHERE t1.product_id = t2.post_id) > 0;
This is obviously a simplification and you will most likely need to constrain your query a little further.
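For the meta-table layout implied by the question, the constrained version might look roughly like this; wp_postmeta and a table2 with (sku, sale_price) columns are assumptions, not names from the question:
UPDATE wp_postmeta price
JOIN wp_postmeta sku -- self-join to find each product's _sku row
ON sku.post_id = price.post_id
AND sku.meta_key = '_sku'
JOIN table2 t2 ON t2.sku = sku.meta_value -- table2 holds the imported CSV (assumed)
SET price.meta_value = t2.sale_price
WHERE price.meta_key = '_sale_price';
This matches each product's _sku against the imported CSV and updates only its _sale_price row.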
Make sure to backup your full database before attempting. I recommend you work on a non-production database until the process works flawlessly.
It seems to me that rAndom69's answer does not work on PostgreSQL 12, but the join with a WHERE clause does:
UPDATE tableA
SET fieldToPopulateInTableA = temp_update_table.fieldPopulated
FROM temp_update_table
WHERE tableA.correspondingField = temp_update_table.correspondingField

Flagging records on large mysql file

We are currently importing very large CSV files into a MySQL data warehouse. A key part of the processing is to flag whether a record in the CSV file matches an existing record in the warehouse. The "match" is done by comparing specific fields in the new data against the previous version of the table. If the record is "new", or if there have been updates, we want to add it to the warehouse.
At the moment the processing plan is as follows :
~ read the CSV file into MySQL table A
~ is the primary key of A present in old-A? If it isn't, set record status to "NEW"
~ if the key is in old-A, issue an UPDATE statement joining old-A to A
~ if A.field1 <> old-A.field1 OR A.field2 <> old-A.field2 OR A.field3 <> old-A.field3, flag record status as "UPDATE"
~ process NEW or UPDATEd records according to record status
A and old-A currently hold on the order of 50M records each. We would expect around 1M new records and 5-10M updates.
Although we are currently using MySQL for this processing, I am wondering whether it would simply be better to do it in a scripting language. We are finding in particular that the step to flag the updates is very time-consuming; essentially, we have an UPDATE statement that is unable to use any indexes.
so
CREATE TABLE A (
key1 BIGINT,
field1 VARCHAR(50),
field2 VARCHAR(50),
field3 VARCHAR(50)
);
LOAD DATA ...
... add field rec_status to table A
... then
UPDATE A
LEFT JOIN `old-A` ON A.key1 = `old-A`.key1
SET A.rec_status = 'NEW'
WHERE `old-A`.key1 IS NULL;
UPDATE A
JOIN `old-A` ON A.key1 = `old-A`.key1
SET A.rec_status = 'UPDATED'
WHERE A.field1 <> `old-A`.field1
OR A.field2 <> `old-A`.field2
OR A.field3 <> `old-A`.field3;
...
I would consider skipping the "flag" step altogether: process the CSV file with a script (or table A with MySQL statements), look up each record in the old-A table by whatever criteria you choose, such as field1 and/or field2 of table A; if it is found, lock and update the old-A record and delete the processed record from the CSV or from table A; if it is not found, create the record in old-A with the data.
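As a sketch, that found/not-found logic can also be collapsed into a single upsert, assuming key1 is the primary (or a unique) key of old-A:
INSERT INTO `old-A` (key1, field1, field2, field3)
SELECT key1, field1, field2, field3
FROM A
ON DUPLICATE KEY UPDATE -- assumes key1 is unique in old-A
field1 = VALUES(field1),
field2 = VALUES(field2),
field3 = VALUES(field3);
New keys are inserted, existing keys are updated in place, and no separate flagging pass is needed.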

mysql insert update LOAD DATA LOCAL INFILE

I am using LOAD DATA LOCAL INFILE to load data into a temp table, mid. Then I use an UPDATE query to update matching records in the products table. The only matching field in both is the model.
$q = "LOAD DATA LOCAL INFILE 'Mid.csv' INTO TABLE mid
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' IGNORE 1 LINES
(#col1,#col2,#col3,#col4,#col5,#col6) set model=#col1,price=#col3,stock=#col6 ";
mysql_query($q, $db);
mysql_query('UPDATE mid m, products p set p.products_price= m.price,p.products_quantity= m.stock where p.products_model= m.model');
It works and updates the products table. The issue I am having is that there are new records in the mid table which don't get inserted, since I am using an UPDATE statement.
I have looked at INSERT ... ON DUPLICATE KEY UPDATE. I have seen loads of examples where it works on a single table, but none where I have to match against another table.
Either I am searching for the wrong thing or there is another way to do this.
I would appreciate any help.
regards
naf
I'm not sure what the other columns in the products table are, but here's a basic approach that should work for you based on the 3 columns in your example, assuming the products_model column is unique in the products table:
INSERT INTO products (products_price, products_quantity, products_model)
SELECT price, stock, model
FROM mid
ON DUPLICATE KEY UPDATE
products_price = VALUES(products_price),
products_quantity = VALUES(products_quantity);
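Note that ON DUPLICATE KEY only fires if MySQL can detect the duplicate, so if products_model is not already covered by a unique index you would first need something like:
ALTER TABLE products ADD UNIQUE KEY (products_model); -- assumes products_model should be unique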

Mysql single column table --> insert in other table

I have two tables: t1 and t2
- t2 has only 1 column named stuff (60.000 entries).
- t1 has 15 columns, including stuff (empty). t1 has about 650.000 entries.
How can I import the data from t2.stuff into t1.stuff when I have nothing to match it against? (I just want to populate the empty fields of t1.stuff with data from t2.stuff and don't care about matching ids or anything.)
The best case (I think) would be that if I ran this query about 11 times, all fields of t1.stuff would be populated, with no empty field in t1.stuff left over.
Here is an example what the tables look like:
t1:
|__a___|_b_|_c_|stuff|...|
|___308|foo|bar|_____|baz|
|___312|foo|bar|_____|baz|
...
|655578|foo|bar|_____|baz|
t2:
|___stuff___|
|some_info_1|
|some_info_2|
...
|some_info_n|
Maybe there are multiple steps required...
UPDATE
Here is the SOLUTION I went with, in case someone has a similar problem. All credit goes to user nurdglaw for pointing me in the right direction. So here we go:
Add a new column to the table in question, populated with auto-incrementing numbers (I set alter table t1 auto_increment = 1 and temporarily disabled auto-increment on my primary key, to avoid an error with this code): ALTER TABLE t1 ADD COLUMN new_column INTEGER UNIQUE AUTO_INCREMENT;
Did the same thing for t2. If you don't already have a second table, you can do something like this:
CREATE TABLE t2 (id INTEGER PRIMARY KEY AUTO_INCREMENT,t2_data_column VARCHAR(255)); <-- adjust number to your needs
and import your data with:
LOAD DATA LOCAL INFILE 'path_on_your_server/data_file.csv'
INTO TABLE t2
LINES TERMINATED BY '\r\n' <-- adjust to your linebreak needs
(t2_data_column)
Now that you have something to match against, you can INNER JOIN t1 with t2 and copy the data from t2 into t1:
UPDATE t1 AS s
JOIN t2 AS t ON t.id=s.new_column
SET s.stuff=t.t2_data_column; <-- stuff was the column in t1 I wanted to import the data to.
Tidy up the mess
DROP TABLE t2;
ALTER TABLE t1 DROP COLUMN new_column;
Re-enable auto-increment on your primary key and set it to the number you need for new rows, if you used one before.
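For that last step, something along these lines should work (the primary-key column name id and the counter value are placeholders for your own):
ALTER TABLE t1 MODIFY COLUMN id INT NOT NULL AUTO_INCREMENT; -- 'id' is a placeholder for your primary-key column
ALTER TABLE t1 AUTO_INCREMENT = 650001; -- next value for new rows; adjust to your data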
That is it, you're done!
One further note: I decided to adjust my data offline and import all 650.000 entries needed with this method in one go, rather than only the 60.000 mentioned in the initial question. But you get the idea: this works for any amount of data, matched however you need.
INSERT statements create new rows in your table.
You need an UPDATE on the already existing rows
An easy way to do that is to use an external scripting language; here is a Rebol example:
; assuming you use the mysql library from softinnov
; and a_ is the name of the unique key to a row in t1
db: open mysql://user:pass#mysql
insert db {select * from t1}
t1rows: copy db
insert db {select * from t2}
t2rows: copy db
foreach row t1rows [
insert db [ {update t1 set t1.stuff = ? where t1.a_ = ?} t2rows/1/1 row/1]
either tail? next t2rows [
t2rows: head t2rows
] [
t2rows: next t2rows
]
]
sorry, I still have difficulties with the formatting and the variables in your example
Try this
INSERT INTO t1 (stuff)
SELECT DISTINCT stuff FROM t2
I hope it helps

Mysql Load Data for existing column of a table

Initially I uploaded about 100,000 rows using LOAD DATA INFILE. I'm using Ubuntu.
Example data:
ToneCode | Artist | MovieName | Language
1        | Mj     | NULL      | English
3        | AB     | NULL      | English
4        | CD     | NULL      | English
5        | EF     | NULL      | English
But now I need to update the MovieName column for ToneCode 1 through 100000; I have the data in a .csv file.
Please suggest how to load that .csv into the existing table that already has data.
I think the fastest way to do this, using purely MySQL and no extra scripting, would be as follows:
CREATE a temporary table with two columns, ToneCode and MovieName, the same as in your target table
load the data from your new CSV file into it using LOAD DATA INFILE
UPDATE your target table using the INNER JOIN-like syntax that http://dev.mysql.com/doc/refman/5.1/en/update.html describes:
UPDATE items,month SET items.price=month.price WHERE items.id=month.id;
this would "join" the two tables items and month (using just the "comma syntax" for an INNER JOIN) on the id column, and update the items.price column with the value of the month.price column.
I have found a solution, as you guys mentioned above.
Solution example:
CREATE TABLE A (Id INT PRIMARY KEY, Name VARCHAR(20), Artist VARCHAR(20), MovieName VARCHAR(20));
Add all 100000 rows using:
LOAD DATA INFILE '/Path/file.csv' INTO TABLE A
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(Id, Name, Artist); -- MovieName is NULL at this point
CREATE TEMPORARY TABLE TA (Id INT PRIMARY KEY, MovieName VARCHAR(20));
Upload the data into the temporary table TA:
LOAD DATA INFILE '/Path/file.csv' INTO TABLE TA
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(Id, MovieName);
Now, using a join as you said:
UPDATE TA, A SET A.MovieName = TA.MovieName WHERE A.Id = TA.Id;