I have a single procedure that has two insert statements in it for two different tables. I must insert data into table1 before I can insert into table2. I'm using PHP to do the data collection. What I'd like to know is how to insert multiple rows into table2, which can have many rows associated with table1. How would I do this?
I want to store the person in table1 only once, but table2 requires multiple rows. If these insert statements were in separate procedures I wouldn't have a problem, but I don't know how I would insert more than one row into table2 without table1 rejecting a second, duplicate record.
BEGIN
INSERT INTO user(name, address, city) VALUES(Name, Address, City);
INSERT INTO `order`(order_id, `desc`) VALUES(OrderNo, Description);
END
I'd suggest you do it separately, otherwise you'd need a complicated solution which is prone to error if something changes.
The complicated solution is:
1. Join each order number and its description with a separator: orderno#description
2. Join all orders with a different separator: orderno#description/orderno#description/...
3. Pass the resulting string to the procedure.
4. In the procedure, split the string on the order separator, then loop through each chunk.
5. For each chunk, split it on the first separator and insert the pieces into the appropriate columns.
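Purely for illustration (this is the approach being argued against), here is a sketch of the splitting step using SUBSTRING_INDEX, with hypothetical '#' and '/' separators:

-- hypothetical encoded input: '#' inside an order, '/' between orders
SET @orders = '1001#First order/1002#Second order';
-- first order chunk: everything before the first '/'
SET @chunk = SUBSTRING_INDEX(@orders, '/', 1);           -- '1001#First order'
-- split that chunk on '#' into the two column values
SELECT SUBSTRING_INDEX(@chunk, '#', 1)  AS order_no,     -- '1001'
       SUBSTRING_INDEX(@chunk, '#', -1) AS description;  -- 'First order'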
As you can see, this is bad.
I am sorry, but what's stopping you from inserting data into these (seemingly unrelated) tables in separate queries? If you don't like the idea of it failing halfway through, you can wrap the inserts in a transaction; both mysqli and PDO can do that just fine.
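A minimal SQL-level sketch of that approach, assuming the table and column names from the question plus a hypothetical AUTO_INCREMENT id on user and a user_id column on order (neither is shown in the question):

START TRANSACTION;
-- store the person once
INSERT INTO user (name, address, city) VALUES ('Jane Doe', '1 Main St', 'Springfield');
-- reuse the generated id for as many order rows as needed
SET @uid = LAST_INSERT_ID();
INSERT INTO `order` (user_id, `desc`) VALUES (@uid, 'First order'), (@uid, 'Second order');
COMMIT;  -- or ROLLBACK on error, so nothing is left half-written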
Answering your question directly: INSERT's IGNORE mode turns errors during insertion into warnings, so when you attempt to insert a duplicate row, a warning is issued and the row is not inserted, but there is no error.
You could use the IGNORE keyword on the first statement.
http://dev.mysql.com/doc/refman/5.1/en/insert.html:
If you use the IGNORE keyword, errors that occur while executing the INSERT statement are treated as warnings instead. For example, without IGNORE, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table causes a duplicate-key error and the statement is aborted. With IGNORE, the row still is not inserted, but no error is issued.
But somehow this seems rather inefficient to me, a "stabbed from behind through the chest in the eye" solution.
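Applied to the question's first statement, a sketch would look like this (assuming a UNIQUE or PRIMARY KEY that identifies the person, which the question does not show):

INSERT IGNORE INTO user (name, address, city) VALUES ('Jane Doe', '1 Main St', 'Springfield');
-- a duplicate on that key is skipped with a warning instead of aborting with an error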
I have some words like ["happy","bad","terrible","awesome","happy","happy","horrible",.....,"love"].
There are a lot of these words, maybe more than 100-200.
I want to save them all to the DB at once.
I think opening a DB call for every single word is wasteful.
What is the best way to save?
table structure
wordId userId word
You are right that executing repeated INSERT statements to insert rows one at a time, i.e. processing RBAR (row by agonizing row), can be expensive, and excruciatingly slow, in MySQL.
Assuming that you are inserting the string values ("words") into a column in a table, and each word will be inserted as a new row in the table... (and that's a whole lot of assumptions there...)
For example, a table like this:
CREATE TABLE mytable (mycol VARCHAR(50) NOT NULL PRIMARY KEY) ENGINE=InnoDB
You are right that running a separate INSERT statement for each row is expensive. MySQL provides an extension to the INSERT statement syntax which allows multiple rows to be inserted.
For example, this sequence:
INSERT IGNORE INTO mytable (mycol) VALUES ('happy');
INSERT IGNORE INTO mytable (mycol) VALUES ('bad');
INSERT IGNORE INTO mytable (mycol) VALUES ('terrible');
Can be emulated with a single INSERT statement:
INSERT IGNORE INTO mytable (mycol) VALUES ('happy'),('bad'),('terrible');
Each "row" to be inserted is enclosed in parens, just as it is in the regular INSERT statement. The trick is the comma separator between the rows.
The trouble with this comes in when there are constraint violations: without IGNORE, either the whole multi-row statement succeeds or the whole thing is aborted, unlike the individual inserts, where one of them can fail and the other two still succeed. (With IGNORE, the offending rows are simply skipped and the rest are inserted.)
Also, be careful that the size (in bytes) of the statement does not exceed the max_allowed_packet variable setting.
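For reference, you can check the server's current limit like this (raising it requires admin privileges and only affects new connections):

SHOW VARIABLES LIKE 'max_allowed_packet';
-- e.g. raise the server-wide limit to 64MB
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;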
Alternatively, a LOAD DATA statement is an even faster way to load rows into a table. But for a couple of hundred rows, it's not really going to be much faster. (If you were loading thousands and thousands of rows, the LOAD DATA statement could potentially be much faster.)
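A sketch of the LOAD DATA route, assuming a hypothetical file /tmp/words.txt with one word per line and the mytable/mycol names from above:

-- requires the LOCAL infile capability to be enabled on both client and server
LOAD DATA LOCAL INFILE '/tmp/words.txt'
IGNORE
INTO TABLE mytable
LINES TERMINATED BY '\n'
(mycol);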
It would be helpful to know how you are generating that list of words, but you could do:
insert into your_table (your_column) values ('word1'), ('word2');
Without more info, that is about as much as we can help.
You could add a loop in whatever language you're using to iterate over the list and build that statement.
I'm trying to make a pair of values unique: for example, if the values (5, 10) are already in the table, the same pair can't be added again.
Currently I select x and y from the table to check whether the pair already exists, and insert it only if it doesn't. In other words:
"Select * from location where x=? and y=?"
If no result is returned, I go ahead and insert the values.
This is typically accomplished by creating a unique index on both columns combined (a multi-column index).
Then, MySQL will prevent you from inserting duplicates. You can go ahead and try to insert the record, and if you get a duplicate key error, you know it already exists.
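A minimal sketch, assuming the location table and x/y columns from the question:

-- one index over both columns: the pair (x, y) must be unique,
-- while x alone or y alone may still repeat
ALTER TABLE location ADD UNIQUE KEY uq_location_xy (x, y);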
Alternatively, another way to handle it is to use INSERT IGNORE, so that no error occurs if you try to insert a duplicate row. The duplicate still won't be inserted, so you simply check ROW_COUNT() (the number of affected rows) to see whether the insert succeeded.
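For the INSERT IGNORE route, a sketch of that check against the same assumed table:

INSERT IGNORE INTO location (x, y) VALUES (5, 10);
SELECT ROW_COUNT();  -- 1 if the pair was new, 0 if the duplicate was ignored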
Using a unique index and catching the failure on the insert is more performant than selecting then trying to insert because in the case you do insert, MySQL only has to perform one search, rather than two.
I am inserting some words into a two-column table with this command:
INSERT IGNORE INTO terms (term) VALUES ('word1'), ('word2'), ('word3');
How can I get the ID (primary key) of the row into which each word is inserted? I mean returning a value like "55,56,57" after executing the INSERT. Does MySQL have such a response?
The term column is UNIQUE. If a term already exists, MySQL will not insert it. Is it possible to return the reference for this duplication (i.e. the ID of the row in which the term exists)? A response like "55,12,56".
You get it via SELECT LAST_INSERT_ID(); or via having your framework/MySQL library (in whatever language) call mysql_insert_id().
That won't work here; you have to query the IDs after inserting.
Why not just:
SELECT ID
FROM terms
WHERE term IN ('word1', 'word2', 'word3')
First, to get the id just inserted, you can do something like:
SELECT LAST_INSERT_ID() ;
Careful: this works only right after your last INSERT query, and with a multi-row insert it returns only the first generated ID!
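A quick sketch of that caveat, assuming terms has an AUTO_INCREMENT id:

INSERT INTO terms (term) VALUES ('word1'), ('word2'), ('word3');
SELECT LAST_INSERT_ID();  -- returns only the id generated for 'word1', not the ids of all three rows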
Then, with the IGNORE option, I don't think it is possible to get the rows that were not inserted. When you use INSERT IGNORE, you just tell MySQL to ignore the rows that would have created a duplicate entry.
If you don't use this option, the INSERT stops at the duplicate and the error message tells you which row caused it.
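In practice, that means combining the two suggestions above: run the INSERT IGNORE, then select the ids back for all the terms, whether they were just inserted or already existed (assuming the primary key column is called id):

INSERT IGNORE INTO terms (term) VALUES ('word1'), ('word2'), ('word3');
SELECT id, term FROM terms WHERE term IN ('word1', 'word2', 'word3');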
I have a mysql db with several tables, let's call them Table1, Table2, etc. I have to make several calls to each of these tables.
Which is most efficient,
a) Collecting all queries for each table in one message, then executing them separately, e.g.:
INSERT INTO TABLE1 VALUES (A,B);
INSERT INTO TABLE1 VALUES (A,B);
...execute
INSERT INTO TABLE2 VALUES (A,B);
INSERT INTO TABLE2 VALUES (A,B);
...execute
b) Collecting ALL queries in one long message (not in order of table), then executing this query, e.g.:
INSERT INTO TABLE1 VALUES (A,B);
INSERT INTO TABLE2 VALUES (B,C);
INSERT INTO TABLE1 VALUES (B,A);
INSERT INTO TABLE3 VALUES (D,B);
c) Something else?
Currently I am doing it like option (b), but I am wondering if there is a better way.
(I am using jdbc to access the db, in a groovy script).
Thanks!
Third option - using prepared statements.
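The answer presumably means JDBC PreparedStatement batching; purely to illustrate the underlying mechanism (not the JDBC API itself), a server-side prepared statement in plain MySQL looks like this sketch:

PREPARE ins FROM 'INSERT INTO TABLE1 VALUES (?, ?)';
SET @a = 'A', @b = 'B';
EXECUTE ins USING @a, @b;   -- repeat EXECUTE with new values as needed
DEALLOCATE PREPARE ins;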
Since you haven't posted your code, this is a bit of a wild guess, but this blog post shows great performance improvements using the Groovy Sql.withBatch method.
The code they show (which uses sqlite) is reproduced here for posterity:
Sql sql = Sql.newInstance("jdbc:sqlite:/home/ron/Desktop/test.db", "org.sqlite.JDBC")
sql.execute("create table dummyTable(number)")
sql.withBatch { stmt ->
    100.times {
        stmt.addBatch("insert into dummyTable(number) values(${it})")
    }
    stmt.executeBatch()
}
which inserts the numbers 0 through 99 into the table dummyTable
This will obviously need tweaking to work with your unknown code
Rather than looking at which is more efficient, first consider whether the tables are large and whether you need concurrency.
If they are (millions of records), then you may want to separate the inserts statement by statement and leave some time between each one, so you don't lock the table for too long at a time.
If your table isn't that large, or concurrency is not a problem, then by all means do whichever. Look at the slow query log and see which approach is faster.
I have a table set up with a UNIQUE column called, for example, test. I want to insert a row only if there isn't already a row in the table with that test value. I know I could just run the INSERT and it would throw an error if the value already existed (which wouldn't really cause any harm, AFAIK), but is there a way to do this properly using only MySQL? I'm pretty sure it can be done with functions, but I've never used those before and I think there's an easier way.
Sounds like a job for INSERT IGNORE:
If you use the IGNORE keyword, errors that occur while executing the INSERT statement are treated as warnings instead. For example, without IGNORE, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table causes a duplicate-key error and the statement is aborted. With IGNORE, the row still is not inserted, but no error is issued.
Something like this should work (FROM DUAL lets the WHERE clause be used without a real source table):
INSERT INTO table_name (column1, column2)
SELECT 'value1', 'value2' FROM DUAL
WHERE 1 NOT IN (SELECT 1 FROM table_name WHERE test = 'test')
You can use the IGNORE keyword, like this:
INSERT IGNORE INTO table_name (test) VALUES('my_value');
From the MySQL documentation:
If you use the IGNORE keyword, errors that occur while executing the INSERT statement are treated as warnings instead. For example, without IGNORE, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table causes a duplicate-key error and the statement is aborted. With IGNORE, the row still is not inserted, but no error is issued.
If you want to update the existing row rather than ignore the duplicate update entirely, check out the ON DUPLICATE KEY UPDATE syntax.
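A sketch of that variant, reusing the table_name/test names above plus a hypothetical hits column, just so there is something to update on conflict:

INSERT INTO table_name (test, hits)
VALUES ('my_value', 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;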
INSERT IGNORE INTO your_table (v, b, g) VALUES (1, 2, 3);
will do nothing if you hit a duplicate key (primary or unique), but then you need to know which keys you have.
Or, as John said, go with preselected data.