Probably a noob question and there are workarounds, but I'd like to know if any SQL expert can provide a better solution for this:
We know about this query:
INSERT INTO table1 (column1, column2)
SELECT column1, column2
FROM table2;
But I was wondering: is there a way to insert into multiple tables using this kind of query? I have a SELECT statement that produces a set of data which I need to insert into multiple tables. This is purely for a data migration case, and I don't want to use a cursor. Any alternatives?
No, you cannot insert records into multiple tables inside one query.
What you can do, though, is insert the data into a temporary table first. Then you can insert into multiple tables from your temp table (inserting into one table at a time). This way you won't have to select the data multiple times.
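For example, a minimal sketch of the temp-table approach (the temp table name and the second destination table are placeholders I've added for illustration):

CREATE TEMPORARY TABLE tmp_migration
SELECT column1, column2
FROM table2;

-- Insert into each destination table from the temp table, one at a time
INSERT INTO table1 (column1, column2)
SELECT column1, column2 FROM tmp_migration;

INSERT INTO another_table (column1, column2)
SELECT column1, column2 FROM tmp_migration;

DROP TEMPORARY TABLE tmp_migration;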
I have a case where I'm doing two queries: query1 is a bulk INSERT ... ON DUPLICATE KEY UPDATE on table1. For query2, I want to do another bulk INSERT on table2 with some application data along with using the ids inserted/updated from query1. I know I can do this with an intermediate query, selecting the ids I need from table1 and then inserting them into table2 along with application data, but I really want to avoid the extra network back-and-forth of that query along with the db overhead. Is there any way I can either get the ids inserted/updated from query1 when running that, or do some kind of complex, but relatively less expensive INSERT ... SELECT FROM in query2 to avoid this?
As far as I know, getting ids added/modified returned from query1 is impossible without a separate query, and I can't think of a way to batch INSERT ... SELECT FROM where the insertion values for each row are dependent on the selected value, but I'd love to be proven wrong, or shown a way around either of those.
There is no way to get a set of IDs as a result of a bulk INSERT.
One option you have is indeed to run a SELECT query to get the IDs and use them in the second bulk INSERT. But that's a hassle.
Another option is to run the 2nd bulk INSERT into a temporary table, let's call it table3, then use INSERT INTO table2 ... SELECT FROM ... table1 JOIN table3 ...
In a similar use case we eventually found that this is the fastest option, provided that you index table3 correctly.
Note that in this case you don't have a SELECT that you need to loop over in your code, which is nice.
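A hedged sketch of that shape (every table, column, and key name below is an assumption for illustration, since the question doesn't give a schema):

-- Temp table holding the application data, keyed by something table1 also knows
CREATE TEMPORARY TABLE table3 (
    natural_key VARCHAR(64) NOT NULL,
    app_value VARCHAR(255),
    KEY idx_natural_key (natural_key)  -- index the join column, as noted above
);

INSERT INTO table3 (natural_key, app_value)
VALUES ('k1', 'v1'), ('k2', 'v2');

-- A single INSERT ... SELECT combines the table1 ids with the app data
INSERT INTO table2 (table1_id, app_value)
SELECT t1.id, t3.app_value
FROM table1 AS t1
JOIN table3 AS t3 ON t3.natural_key = t1.natural_key;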
There are 2 tables in the same database with the same structure. I want to copy all data from one table to the other table using MySQL. The source table may have the same number of rows as the destination table, or fewer, or more.
I tried searching. I found 2 approaches:
Approach #1
TRUNCATE destination;
INSERT INTO destination SELECT * FROM source;
Approach #2
DROP TABLE destination;
CREATE TABLE destination SELECT * FROM source;
Isn't there any other approach involving UPDATE?
Using UPDATE? I don't think so.
You can use INSERT:
INSERT INTO destination
(
    column_1,
    column_2,
    ....
)
SELECT
    column_1,
    column_2,
    ....
FROM source;
Note: the number of columns listed for destination must equal the number of columns selected from source.
Approach #1 will not always work (the INSERT fails if the two tables' structures ever differ), whereas approach #2 will always work, since it recreates destination from source.
I am currently working on a web-based system using a MySQL db.
I realised that I had initially set up the columns within the tables incorrectly, and I now need to move the data from one table column (receiptno) in table (clients) into a similar table column (receiptno) in table (revenue).
I am still quite inexperienced with MySQL and therefore I don't know the syntax to accomplish this.
Can I get some help with it?
Thanks
If you simply wanted to insert the data into new records within the revenue table:
INSERT INTO revenue (receiptno) SELECT receiptno FROM clients;
However, if you want to update existing records in the revenue table with the associated data from the clients table, you would have to join the tables and perform an UPDATE:
UPDATE revenue JOIN clients ON join_condition_here
SET revenue.receiptno = clients.receiptno;
Learn about SQL joins.
Same smell, different odor to eggyal's answer; this also works in Oracle and Postgres, so your mileage may vary.
UPDATE revenue t1 SET receiptno = (
    SELECT receiptno FROM clients t2 WHERE t2.client_id = t1.revenue_id
);
You will have to adjust the WHERE clause to suit your needs ...
INSERT INTO newtable (field1, field2, field3)
SELECT field1, field2, field3
FROM oldtable
I have a MySQL db with several tables; let's call them Table1, Table2, etc. I have to make several calls to each of these tables.
Which is most efficient,
a) Collecting all queries for each table in one message, then executing them separately, e.g.:
INSERT INTO TABLE1 VALUES (A,B);
INSERT INTO TABLE1 VALUES (A,B);
...execute
INSERT INTO TABLE2 VALUES (A,B);
INSERT INTO TABLE2 VALUES (A,B);
...execute
b) Collecting ALL queries in one long message(not in order of table), then executing this query, e.g:
INSERT INTO TABLE1 VALUES (A,B);
INSERT INTO TABLE2 VALUES (B,C);
INSERT INTO TABLE1 VALUES (B,A);
INSERT INTO TABLE3 VALUES (D,B);
c) Something else?
Currently I am doing it like option (b), but I am wondering if there is a better way.
(I am using jdbc to access the db, in a groovy script).
Thanks!
Third option - using prepared statements.
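To illustrate the idea in SQL itself, here is a sketch using MySQL's server-side prepared statement syntax (from JDBC/Groovy you would more likely use a PreparedStatement with addBatch; the table and values are the placeholders from the question):

PREPARE ins FROM 'INSERT INTO Table1 VALUES (?, ?)';
SET @x = 'A', @y = 'B';
EXECUTE ins USING @x, @y;  -- re-execute with new values for each row
SET @x = 'B', @y = 'A';
EXECUTE ins USING @x, @y;
DEALLOCATE PREPARE ins;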
Since you haven't posted your code, this is a bit of a wild guess, but this blog post shows great performance improvements using the Groovy Sql.withBatch method.
The code they show (which uses SQLite) is reproduced here for posterity:
Sql sql = Sql.newInstance("jdbc:sqlite:/home/ron/Desktop/test.db", "org.sqlite.JDBC")
sql.execute("create table dummyTable(number)")
sql.withBatch { stmt ->
    100.times {
        stmt.addBatch("insert into dummyTable(number) values(${it})")
    }
    stmt.executeBatch()
}
which inserts the numbers 0 to 99 into a table dummyTable
This will obviously need tweaking to work with your unknown code
Rather than looking at which is more efficient, first consider whether the tables are large and whether you need concurrency.
If they are large (millions of records), then you may want to separate them on a statement-by-statement basis and leave some time between statements, so you will not lock a table for too long at a time.
If your tables aren't that large, or concurrency is not a problem, then by all means do whichever. You should look at the slow query log and see which statements are faster.
I want to bulk insert all rows from one table into another. I am confused about how to use SELECT with INSERT. Is there a way that a new table is automatically created if it does not exist?
There are two ways to do this:
One is INSERT INTO ... SELECT, which will insert the resultset of your query into an existing table with the same data structure as your query:
INSERT INTO MyTable_Backup
SELECT * FROM MyTable;
The other is CREATE TABLE ... SELECT ..., which will create a new table based on the data structure of your query and insert the resultset:
CREATE TABLE MyTable_Backup
SELECT * FROM MyTable;
However, one thing to note is that CREATE TABLE ... SELECT will not copy the indexes of the source table. If you need indexes, you need to add them manually.
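If you do want the indexes carried over, one MySQL idiom is CREATE TABLE ... LIKE, which copies the full table definition (columns and indexes, but no rows), followed by the INSERT ... SELECT:

CREATE TABLE MyTable_Backup LIKE MyTable;  -- same columns and indexes, no data
INSERT INTO MyTable_Backup
SELECT * FROM MyTable;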
A trigger and/or SELECT ... INTO are also sometimes recommended here.
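For completeness, a minimal sketch of the trigger idea (the column names here are assumptions), which keeps the backup table in sync as new rows arrive rather than copying in bulk:

CREATE TRIGGER mytable_backup_ai
AFTER INSERT ON MyTable
FOR EACH ROW
INSERT INTO MyTable_Backup (col1, col2)
VALUES (NEW.col1, NEW.col2);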