MySQL single-column table --> insert into another table

I have two tables: t1 and t2
- t2 has only one column, named stuff (60,000 entries).
- t1 has 15 columns, including stuff (currently empty); t1 has about 650,000 entries.
How can I import the data from t2.stuff into t1.stuff when I have nothing to match it against? (I just want to populate the empty fields of t1.stuff with data from t2.stuff and don't care about matching IDs or anything.)
In the best case (I think), running such a query about 11 times would populate all fields of t1.stuff, so that no empty field in t1.stuff is left over.
Here is an example what the tables look like:
t1:
|__a___|_b_|_c_|stuff|...|
|___308|foo|bar|_____|baz|
|___312|foo|bar|_____|baz|
...
|655578|foo|bar|_____|baz|
t2:
|___stuff___|
|some_info_1|
|some_info_2|
...
|some_info_n|
Maybe there are multiple steps required...
UPDATE
Here is the SOLUTION I went with, in case someone has a similar problem. All credit goes to user nurdglaw for pointing me in the right direction. So here we go:
1. Add a new column to the table in question, populated with auto-incrementing numbers. (I set ALTER TABLE t1 AUTO_INCREMENT = 1 and temporarily disabled auto-increment on my primary key to avoid an error with this code.)
ALTER TABLE t1 ADD COLUMN new_column INTEGER UNIQUE AUTO_INCREMENT;
2. Do the same thing for t2. If you don't already have a second table, you can create one like this:
CREATE TABLE t2 (id INTEGER PRIMARY KEY AUTO_INCREMENT, t2_data_column VARCHAR(255)); -- adjust the length to your needs
and import your data with:
LOAD DATA LOCAL INFILE 'path_on_your_server/data_file.csv'
INTO TABLE t2
LINES TERMINATED BY '\r\n' -- adjust to your line-break needs
(t2_data_column);
3. Now that you have something to match against, you can join t1 with t2 and copy the data from t2 into t1:
UPDATE t1 AS s
JOIN t2 AS t ON t.id = s.new_column
SET s.stuff = t.t2_data_column; -- stuff is the column in t1 I wanted to import the data into
4. Tidy up the mess:
DROP TABLE t2;
ALTER TABLE t1 DROP COLUMN new_column;
5. Re-enable auto-increment on your primary key, and set it to the number you need for new rows, if you used one before.
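For example (a minimal sketch; the primary key name id and the next value 655579 are assumptions, adjust them to your schema):
ALTER TABLE t1 MODIFY COLUMN id INTEGER NOT NULL AUTO_INCREMENT; -- re-enable auto-increment on the primary key
ALTER TABLE t1 AUTO_INCREMENT = 655579; -- first value handed out to new rows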
That is it, you're done!
One further note: I decided to adjust my data offline and import all 650,000 entries with this method in one go, rather than working with only the 60,000 mentioned in the initial question. But you get the idea: it works for any amount of data and whatever you need to match it against.

INSERT statements create new rows in your table.
You need an UPDATE on the already existing rows.
An easy way to do that is to use an external scripting language; here is a Rebol example:
; assuming you use the mysql library from softinnov
; and a_ is the name of the unique key column in t1
db: open mysql://user:pass@mysql
insert db {select * from t1}
t1rows: copy db
insert db {select * from t2}
t2rows: copy db
foreach row t1rows [
    insert db [{update t1 set t1.stuff = ? where t1.a_ = ?} t2rows/1/1 row/1]
    either tail? next t2rows [
        t2rows: head t2rows
    ] [
        t2rows: next t2rows
    ]
]
Sorry, I still have difficulties with the formatting and the variables in your example.

Try this:
INSERT INTO t1 (stuff)
SELECT DISTINCT stuff FROM t2;
I hope it helps.

Related

Is there any way to automate one table's data insertion depending on another table's data in MySQL?

I don't know if it's possible or not, but I'm wondering if I can automate inserts into MySQL table2 depending on table1.
Let's say I have two tables (table1 and table2) in my database. What I want is an automation that automatically creates a new row in table2, with some default values, whenever a new row is created in table1, so that I don't have to write insertion code for table2 in my PHP file.
I don't know if I've made it clear enough; let me describe my table structure in a nutshell...
table1 :-
user_name ( unique )
user_email
table2 :-
user_name ( same as table1 )
is_account_active ( true as default )
invested ( 0 as default )
current_balance( 0 as default )
So whenever an account is created, I'm currently inserting new data into both table1 and table2. I'm wondering if I can create table2 in such a way that whenever a new row is created in table1, table2 will automatically pull user_name from table1's new row and insert it into its own storage (the rest of the column data are static, so I can set defaults for them; see the sketch below).
[ NOTE: I need to keep them in two different tables as there'll be many columns in each table ]
(I know many of you will ask what I tried, but to be honest I don't even have any idea what to try.)
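A table2 definition along those lines might look like this (a sketch; the column types are assumptions):
CREATE TABLE table2 (
    user_name VARCHAR(60) NOT NULL,          -- same as table1
    is_account_active BOOLEAN DEFAULT TRUE,  -- true as default
    invested DECIMAL(12,2) DEFAULT 0,        -- 0 as default
    current_balance DECIMAL(12,2) DEFAULT 0  -- 0 as default
);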
You can use an AFTER INSERT trigger:
DELIMITER $$
CREATE TRIGGER after_table1_insert
AFTER INSERT ON table1
FOR EACH ROW
BEGIN
    IF NEW.user_name IS NOT NULL AND NEW.user_email IS NOT NULL THEN
        INSERT INTO table2 (user_name, is_account_active, invested, current_balance)
        VALUES (NEW.user_name, 1, 0, 0);
    END IF;
END$$
DELIMITER ;
Every column of the inserted table1 row that you want to use in table2 is available as NEW.column_name.
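With the trigger in place, a single insert into table1 is enough (hypothetical values):
INSERT INTO table1 (user_name, user_email) VALUES ('alice', 'alice@example.com');
-- table2 now automatically contains the row ('alice', 1, 0, 0)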

Remove duplicate records in MySQL

I have a table called leads with duplicate records.
leads:
* account_id
* campaign_id
I want to remove all duplicate account_id rows where campaign_id equals 51.
For example, if account_id = 1991 appears twice in the table, remove the row with campaign_id = 51 and keep the other one.
You could use a delete join:
DELETE t1
FROM yourTable t1
INNER JOIN yourTable t2
    ON t2.account_id = t1.account_id
    AND t2.campaign_id <> 51
WHERE t1.campaign_id = 51;
There's no problem deleting from a table provided that:
You use the correct syntax.
You have made a backup of the table BEFORE you do any deleting.
However, I would suggest a different method:
Create a new table based on the existing table:
CREATE TABLE mytable_new LIKE mytable;
Add unique constraint (or PRIMARY KEY) on column(s) you don't want to have duplicates:
ALTER TABLE mytable_new ADD UNIQUE(column1,[column2]);
Note: if you want a combination of two (or more) columns to be unique, list all the column names in UNIQUE(), separated by commas. In your case, the constraint would probably be UNIQUE(account_id, campaign_id).
Insert data from original table to new table:
INSERT IGNORE INTO mytable_new SELECT * FROM mytable;
Note: IGNORE will insert only the non-duplicate values that satisfy the UNIQUE() constraint; duplicates are skipped. If you have an app that runs MySQL INSERT queries against the table, you have to update those queries by adding IGNORE.
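For example, an application query would change along these lines (a sketch; the values are hypothetical):
INSERT IGNORE INTO mytable (account_id, campaign_id) VALUES (1991, 51);
-- skipped with a warning, instead of failing, if the row violates the UNIQUE constraint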
Check data consistency and once you're satisfied, rename both tables:
RENAME TABLE mytable TO mytable_old;
RENAME TABLE mytable_new TO mytable;
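Both renames can also be done atomically in a single statement, so no client ever sees the table missing in between:
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;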
The best thing about this is that if you see anything wrong with the new table, you still have the original one.
Renaming the tables takes less than a second; the probable issue is that the INSERT IGNORE may take a while if you have a lot of data.

Store records in a new table created by a query in MySQL

I have two tables, location and locationdata. I want to query data from both tables using a join and store the result in a new table (locationCreatedNew) that does not already exist in MySQL. Can I do this in MySQL?
SELECT location.id, locationdata.name INTO locationCreatedNew
FROM location RIGHT JOIN locationdata
ON location.id = locationdata.location_location_id;
The sample code in your question is SQL Server syntax; the MySQL counterpart is something like:
CREATE TABLE locationCreatedNew
SELECT * FROM location RIGHT JOIN locationdata
ON location.id=locationdata.location_location_id;
Reference: CREATE TABLE ... SELECT
For CREATE TABLE ... SELECT, the destination table does not preserve information about whether columns in the selected-from table are generated columns. The SELECT part of the statement cannot assign values to generated columns in the destination table.
Some conversion of data types might occur. For example, the AUTO_INCREMENT attribute is not preserved, and VARCHAR columns can become CHAR columns. Retained attributes are NULL (or NOT NULL) and, for those columns that have them, CHARACTER SET, COLLATION, COMMENT, and the DEFAULT clause.
When creating a table with CREATE TABLE ... SELECT, make sure to alias any function calls or expressions in the query. If you do not, the CREATE statement might fail or result in undesirable column names.
CREATE TABLE newTbl
SELECT tbl1.clm, COUNT(tbl2.tbl1_id) AS number_of_recs_tbl2
FROM tbl1 LEFT JOIN tbl2 ON tbl1.id = tbl2.tbl1_id
GROUP BY tbl1.id;
NOTE: newTbl is the name of the new table you want to create, and SELECT * FROM othertable is the query that returns the data the table should be created from.
You can also explicitly specify the data type for a column in the created table:
CREATE TABLE foo (a TINYINT NOT NULL) SELECT b+1 AS a FROM bar;
For CREATE TABLE ... SELECT, if IF NOT EXISTS is given and the target table exists, nothing is inserted into the destination table, and the statement is not logged.
To ensure that the binary log can be used to re-create the original tables, MySQL does not permit concurrent inserts during CREATE TABLE ... SELECT.
You cannot use FOR UPDATE as part of the SELECT in a statement such as CREATE TABLE new_table SELECT ... FROM old_table .... If you attempt to do so, the statement fails.
Please check the documentation for more. Hope this helps.
Use a query like the one below:
CREATE TABLE new_tbl AS
SELECT col1, col2, col3
FROM old_tbl t1, old_tbl t2
WHERE condition;

Import CSV to Update rows in table

There are approximately 26K products (posts), and each product has meta values. The post_id column is the product id in the db, and _sku (meta_key) is the unique id for each product.
I've received a new CSV file that updates all of the values (meta_value) for _sale_price (meta_key) of each product. The CSV file looks like:
SKU, Sale Price
How do I import this CSV to update only the _sale_price rows, based on the post_id (product id) and _sku value?
I know how to do this in PHP by looping through the CSV and selecting and executing an update for each single product, but that seems inefficient.
Preferably with phpMyAdmin and by using LOAD DATA INFILE.
You can use a temporary table to hold the update data and then run a single UPDATE statement:
CREATE TEMPORARY TABLE temp_update_table (meta_key VARCHAR(255), meta_value VARCHAR(255)); -- types are an example, adjust as needed

LOAD DATA INFILE 'your_csv_pathname'
INTO TABLE temp_update_table
FIELDS TERMINATED BY ';'
(meta_key, meta_value);

UPDATE `table`
INNER JOIN temp_update_table ON temp_update_table.meta_key = `table`.meta_key
SET `table`.meta_value = temp_update_table.meta_value;

DROP TEMPORARY TABLE temp_update_table;
If product_id is the unique column of that table, you can do it with a CSV import:
1. Prepare a CSV file of the rows you want to import, including their unique ID. The CSV columns must be in the same order as the table columns; include all columns and no header row.
2. In phpMyAdmin, go to the table in the database and click Import.
3. Select CSV in the Format drop-down.
4. Make sure "Update data when duplicate keys found on import (add ON DUPLICATE KEY UPDATE)" is checked; that option updates existing rows rather than failing on duplicate keys, as sketched below.
You can import the new data into another table (table2), then update your primary table (table1) using an UPDATE with a sub-select:
UPDATE table1 t1 SET
    sale_price = (SELECT meta_value FROM table2 t2 WHERE t2.post_id = t1.product_id)
WHERE
    (SELECT COUNT(*) FROM table2 t2 WHERE t1.product_id = t2.post_id) > 0;
This is obviously a simplification and you will most likely need to constrain your query a little further.
Make sure to backup your full database before attempting. I recommend you work on a non-production database until the process works flawlessly.
It seems to me that rAndom69's answer does not work in PostgreSQL 12, but the join written with a WHERE clause does:
UPDATE tableA
SET fieldToPopulateInTableA = temp_update_table.fieldPopulated
FROM temp_update_table
WHERE tableA.correspondingField = temp_update_table.correspondingField;

Fast LOAD DATA INFILE split across tables connected by id

I have a MySQL database using InnoDB and foreign keys...
I need to import 100 MiB of data from a huge CSV file and split it into two tables, with records as follows:
Table1
id|data|data2
Table2
id|table1_id|data3
Where Table2.table1_id is a foreign key referencing Table1.id.
The MySQL sequence for one record would look like this (sketched after the list):
1. Load the file into a temporary table.
2. Insert from the temporary table into Table1.
3. Get the last insert ID.
4. Do the last insert into Table2 using this reference id...
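That sequence, sketched per record (a minimal sketch; the values are hypothetical):
INSERT INTO Table1 (data, data2) VALUES ('a', 'b');
SET @t1_id = LAST_INSERT_ID(); -- id of the Table1 row just inserted
INSERT INTO Table2 (table1_id, data3) VALUES (@t1_id, 'c');
One round trip like this per record is what makes the approach so slow.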
That is utterly slow...
How do I do this using LOAD DATA INFILE? Any real ideas for a high-speed result?
You could temporarily add a column data3 to Table1 (I also add a done column to distinguish records which originate from the CSV from those that already exist or originate elsewhere):
ALTER TABLE Table1
ADD COLUMN data3 TEXT,
ADD COLUMN done BOOLEAN DEFAULT TRUE;
LOAD DATA
INFILE '/path/to/csv'
INTO TABLE Table1 (data, data2, data3)
SET done = FALSE;
INSERT INTO Table2 (table1_id, data3)
SELECT id, data3 FROM Table1 WHERE NOT done;
ALTER TABLE Table1
DROP COLUMN data3,
DROP COLUMN done;