Performance issues of ON DUPLICATE KEY UPDATE in MySQL bulk update - mysql

MySQL doesn't provide a dedicated bulk UPDATE statement, so we use the ON DUPLICATE KEY UPDATE feature instead. Is it good to use the query below when updating in bulk? If not, what are the performance issues of using it? Is there any other way to bulk update in MySQL?
INSERT into fruits(id, value) VALUES
(1, 'apple'), (2, 'orange'), (3, 'peach'),
(4, 'apple'), (5, 'orange'), (6, 'peach'),
(7, 'apple'), (8, 'orange'), (9, 'peach'), (10, 'apple')
ON DUPLICATE KEY UPDATE value = VALUES(value);

Clever trick. Let us know if it is faster than 10 separate UPDATE statements. I suspect it is -- 9 fewer round trips to the server, 9 fewer calls to the parser, etc.
There is REPLACE, but that is very likely to be less efficient, since it is
DELETE all rows that match any UNIQUE index; and
INSERT the row(s) given.
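For illustration, the REPLACE form of the same bulk change would look like the sketch below (assuming id is the table's PRIMARY KEY). Because the matching rows are deleted first, any columns not listed would be reset to their defaults.
-- Each matching row is DELETEd and re-INSERTed; unlisted columns lose their old values.
REPLACE INTO fruits(id, value) VALUES
(1, 'apple'), (2, 'orange'), (3, 'peach');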
IODKU is effectively
if row exists (based on any UNIQUE key)
then do "update"
else do "insert"
The effort to check if the row exists pulls the necessary blocks into cache, thereby priming things for the update or insert.
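For comparison, the 10-UPDATE alternative mentioned above would be a sketch like this; each statement pays its own round trip and parse, and a plain UPDATE cannot create a row that does not already exist:
UPDATE fruits SET value = 'apple' WHERE id = 1;
UPDATE fruits SET value = 'orange' WHERE id = 2;
-- ... and so on through id = 10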

Related

Insert into on duplicate key query for custom update on each row

I can insert/update data based on the following query on a MySQL server.
INSERT INTO users (user_id, books)
VALUES
(1, 'book1, book2'),
(2, 'book3, book4')
ON DUPLICATE KEY UPDATE books = 'book1, book2';
However, is there a way to set a different books value for each row in the ON DUPLICATE KEY UPDATE clause? Something like the query below, but one that actually works :)
INSERT INTO users (user_id, books)
VALUES
(1, 'book1, book2'),
(2, 'book3, book4')
ON DUPLICATE KEY UPDATE books
VALUES ('book1, book2'), ('book3, book4');
If this is not the right approach for this purpose, how should I structure such queries?
Many thanks for any guidance in advance,
Doug
I assume that the duplicate key is the column user_id.
You can use a CASE expression:
INSERT INTO users (user_id, books) VALUES
(1, 'book1, book2'),
(2, 'book3, book4')
ON DUPLICATE KEY UPDATE
    books = CASE user_id
        WHEN 1 THEN 'book10, book20'
        WHEN 2 THEN 'book30, book40'
    END;
See a simplified demo.
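If the value you want on update is simply whatever the VALUES list supplies for that row, no CASE is needed at all: the VALUES() function, as in the first question above, refers per row to the value that would have been inserted. A minimal sketch:
INSERT INTO users (user_id, books) VALUES
(1, 'book1, book2'),
(2, 'book3, book4')
ON DUPLICATE KEY UPDATE books = VALUES(books);
The CASE form is only needed when the update value differs from the value being inserted, as in the demo above.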

MySQL insert multiple rows and ignore duplicate records

I have this MySQL INSERT query that adds a product to multiple categories:
INSERT INTO _categories_products (product_id, category_id) VALUES (1, 14), (1, 8), (1, 1), (1, 22);
This works great. However, if I add the same product to another subcategory of the same parent, it will create duplicate records for the parent categories:
INSERT INTO _categories_products (product_id, category_id) VALUES (1, 14), (1, 8), (1, 1), (1, 23);
Question: what would be the appropriate MySQL query that ignores the insertion of duplicate records? In other words, the second query should INSERT only one record: 1, 23.
I also tried INSERT IGNORE INTO, but nothing changed.
Thank You!
To start with, you want to create a unique constraint on product/category tuples to avoid duplicates:
alter table _categories_products
add constraint _categories_products_bk
unique (product_id, category_id);
From that point on, an attempt to insert duplicates into the table would, by default, raise an error. You can trap and manage that error with MySQL's ON DUPLICATE KEY syntax:
insert into _categories_products (product_id, category_id)
values (1, 14), (1, 8), (1, 1), (1, 23)
on duplicate key update product_id = values(product_id);
In the case of a duplicate, the above query performs a dummy update on product_id, which effectively turns the insert into a no-op.
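The reason INSERT IGNORE appeared to do nothing earlier is that, without the unique key, there was no duplicate-key error to ignore. Once the constraint exists, it is an alternative that skips duplicate pairs (with a warning) instead of updating them. A sketch:
-- With the unique (product_id, category_id) key in place,
-- only (1, 23) is actually inserted; the other pairs are skipped.
INSERT IGNORE INTO _categories_products (product_id, category_id)
VALUES (1, 14), (1, 8), (1, 1), (1, 23);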

How to ignore an error when data is replaced in MySQL?

I insert data into the table as follows:
REPLACE INTO `test`(`id`,`text`) VALUES (1,'first'), (2, 'second'), (3, 'third')
But if one of the sets of data is invalid, none of the other sets makes it into the table:
REPLACE INTO `test`(`id`,`text`) VALUES (1,'new first'), (2, NULL), (3, 'new third')
How to achieve the following:
The first and third sets of data should replace the existing data in the table, and the second should be ignored, leaving the existing data unchanged.
Try
insert ignore INTO `test`(`id`,`text`) VALUES (1,'new first'), (2, NULL), (3, 'new third')
The INSERT IGNORE command converts errors into warnings, so the invalid row no longer aborts the whole statement. Note that, unlike REPLACE, it does not overwrite rows whose keys already exist; it skips them.
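If the goal is really to overwrite the existing rows 1 and 3 while tolerating the bad row, one sketch is to combine IGNORE with ON DUPLICATE KEY UPDATE. A caveat to verify against your schema and SQL mode: with IGNORE, MySQL may insert the column's implicit default in place of the invalid NULL rather than skipping the row entirely.
-- IGNORE downgrades the error on (2, NULL) to a warning;
-- ON DUPLICATE KEY UPDATE overwrites `text` for the existing ids 1 and 3.
INSERT IGNORE INTO `test` (`id`, `text`)
VALUES (1, 'new first'), (2, NULL), (3, 'new third')
ON DUPLICATE KEY UPDATE `text` = VALUES(`text`);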

How to conditionally update a column that contains delimited values with ON DUPLICATE KEY UPDATE syntax

I am trying to update multiple records (about a thousand of them) using this single statement (this is a process that will run every night). The statement below only includes 3 products for simplicity:
INSERT INTO productinventory
(ProductID, VendorID, CustomerPrice, ProductOverrides)
VALUES
(123, 3, 100.00, 'CustomerPrice'),
(124, 3, 100.00, 'CustomerPrice'),
(125, 3, 100.00, 'CustomerPrice')
ON DUPLICATE KEY UPDATE
CustomerPrice = VALUES(CustomerPrice),
ProductOverrides = CONCAT_WS(',', ProductOverrides, 'CustomerPrice')
;
Everything works fine except that the ProductOverrides column gets the text 'CustomerPrice' added to it every time this statement runs, so it ends up looking like this after it runs twice:
CustomerPrice,CustomerPrice
What I want the statement to do is to add 'CustomerPrice' to the ProductOverrides column, but only if that string does not already exist there. So that no matter how many times I run this statement, it only includes that string once. How do I modify this statement to achieve that?
You can do something like this, using FIND_IN_SET to check whether the string is already present before appending it:
INSERT INTO productinventory (ProductID, VendorID, CustomerPrice, ProductOverrides)
VALUES
(123, 3, 100.00, 'CustomerPrice'),
(124, 3, 100.00, 'CustomerPrice'),
(125, 3, 100.00, 'CustomerPrice')
ON DUPLICATE KEY UPDATE
CustomerPrice = VALUES(CustomerPrice),
ProductOverrides = IF(FIND_IN_SET(VALUES(ProductOverrides), ProductOverrides) > 0,
ProductOverrides,
CONCAT_WS(',', ProductOverrides, VALUES(ProductOverrides)));
Here is an SQLFiddle demo.
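As a quick sanity check of the condition itself: FIND_IN_SET returns the 1-based position of the string within the comma-separated list and 0 when it is absent, so the IF leaves ProductOverrides unchanged whenever the flag is already present.
SELECT FIND_IN_SET('CustomerPrice', 'CustomerPrice'),       -- 1 (found)
       FIND_IN_SET('CustomerPrice', 'Other,CustomerPrice'), -- 2 (found)
       FIND_IN_SET('CustomerPrice', 'Other');               -- 0 (absent)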

How long should inserting 150K rows take in MySQL?

So I am inserting dummy data into my application.
I have insert statements that look like this:
INSERT INTO `submission_tagged` VALUES (1, 4);
INSERT INTO `submission_tagged` VALUES (1, 6);
INSERT INTO `submission_tagged` VALUES (1, 11);
INSERT INTO `submission_tagged` VALUES (2, 6);
INSERT INTO `submission_tagged` VALUES (2, 15);
INSERT INTO `submission_tagged` VALUES (2, 19);
150,000 of them, to be precise. The insertion seems to be taking its time, but they are obviously rather simple inserts, so I am wondering how long I should expect this to take. If it will take a while, I will cancel the insert and change the dummy-data script to generate bulk insert statements...
Local server, so no other traffic.
You can insert multiple rows at once, like:
INSERT INTO `submission_tagged` VALUES (1, 4), (1, 6), ...
But check the docs for your RDBMS to see how many records it can handle at once; it seems that 1,000 will work. That will be much faster than inserting a single row per query.
Try doing them as a single INSERT:
INSERT INTO `submission_tagged` VALUES (1, 4), (1, 6), (1, 11), ...
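Another common speed-up, sketched below assuming the table uses a transactional engine such as InnoDB: wrap the batched inserts in one transaction so the server commits once at the end instead of after every statement.
START TRANSACTION;
INSERT INTO `submission_tagged` VALUES (1, 4), (1, 6), (1, 11);
INSERT INTO `submission_tagged` VALUES (2, 6), (2, 15), (2, 19);
-- ... more batched INSERTs ...
COMMIT;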