Do insertions in MySQL happen in parallel or sequentially? - mysql

I'm using MySQL with the InnoDB engine, and I have created a BEFORE INSERT trigger.
I want to be sure: if two INSERT queries are fired at the same time, will both triggers run in parallel or sequentially?
I added a SLEEP() call in the trigger and fired two INSERT queries; judging from the execution times, the second trigger appears to wait for the first one to finish.

For InnoDB in MySQL, insertions happen in parallel. In my trigger I was updating a row in a different table: when both inserts updated the same row, the insertions ran sequentially (the second waited for the first to release its row lock), and when they updated different rows, the insertions ran in parallel.
Read this post for more details: https://stackoverflow.com/a/32382959/9599500
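A minimal sketch of the experiment described above (table names, columns, and the SLEEP duration are hypothetical, not from the original post):

```sql
-- Hypothetical schema to reproduce the test.
CREATE TABLE orders   (id INT PRIMARY KEY AUTO_INCREMENT, amount INT);
CREATE TABLE counters (id INT PRIMARY KEY, total INT);
INSERT INTO counters VALUES (1, 0);

DELIMITER //
CREATE TRIGGER orders_before_insert
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
  -- Updating the same counters row from two concurrent inserts makes
  -- the second insert block on the row lock until the first commits.
  UPDATE counters SET total = total + NEW.amount WHERE id = 1;
  DO SLEEP(5);  -- exaggerate the overlap to observe the serialization
END//
DELIMITER ;

-- In two sessions, fired at the same time:
--   INSERT INTO orders (amount) VALUES (10);
-- With both triggers updating counters.id = 1, the second INSERT waits;
-- if each updated a different counters row, they would run in parallel.
```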

Related

MySQL Insert with value from previous row / MySQL triggers fire multiple times, how to limit them

I've created a log trigger that runs every time my other table is updated; it creates a row with log information. Unfortunately, the system I am working on performs one operation as multiple queries, so my trigger fires many more times than I need. This is the output from one operation.
During a parameter update, the system first performs a separate DELETE query for each row with a matching parameter and product_id, and then performs an INSERT statement for every parameter that matches the product_id. Because of that, the trigger fires for every query with a parameter (7 times per INSERT for this product, but there are cases with 100+ parameters per product).
So I want to reduce it to one row per operation: the last row. In the future, parameters will be updated through a web service API, so I am looking for a simpler solution than building events with JS and PHP. I thought about subtracting the date of the previous row (with the same product_id) from NOW(), and if the result is nearly or exactly the same, deleting the previous row. I saw posts and articles about LAG(), but it seems it doesn't work with INSERT INTO queries. If you have any suggestions, please help.
https://dev.mysql.com/doc/refman/8.0/en/faqs-triggers.html#faq-mysql-have-trigger-levels says:
Does MySQL 8.0 have statement-level or row-level triggers?
In MySQL 8.0, all triggers are FOR EACH ROW; that is, the trigger is activated for each row that is inserted, updated, or deleted. MySQL 8.0 does not support triggers using FOR EACH STATEMENT.
It doesn't seem like a trigger of the kind MySQL supports can work easily for your case. I recommend you simply execute your inserts, then run the deletions you want from your application, not from triggers.
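As a sketch of that application-side cleanup (the log table and column names are hypothetical), a single DELETE after the operation can keep only the newest log row per product:

```sql
-- Hypothetical log table: param_log(id, product_id, created_at, ...).
-- After the batch of DELETE/INSERT queries finishes, remove every log
-- row for this product except the most recent one.
DELETE FROM param_log
WHERE product_id = 123
  AND id < (SELECT max_id
            FROM (SELECT MAX(id) AS max_id
                  FROM param_log
                  WHERE product_id = 123) AS latest);
```

The derived-table wrapper around the inner SELECT is needed because MySQL does not allow deleting from a table that the same statement also selects from directly (error 1093).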

Unable to Iterate over rows being inserted during AFTER INSERT trigger - MySQL 5.6

I hope you can offer some words of wisdom on an issue I've been struggling with. I am using a MySQL 5.6 trigger to copy data during inserts into a separate table (not the one I'm inserting into).
I'm also modifying the data as it's being copied and need to compare rows within the insert to each other. Due to the lack of support for FOR EACH STATEMENT, I cannot act on the entire insert dataset while the transaction is still in progress. It appears I can only work on the current row through the supported FOR EACH ROW syntax.
Does anybody know a way to overcome this?
Thanks!
UPDATE 18/01/18: @solarflare, thanks for your answers. I looked into splitting the operation into an INSERT followed by a call to a stored procedure. It would work, but it's not a path I want to go down, as it breaks the atomicity of the operation. I tested the same code on PostgreSQL and it works fine.
It appears that when performing a bulk insert, an AFTER INSERT ... FOR EACH ROW trigger in MySQL takes a snapshot of the table as it was before the bulk-insert transaction started: you can query the snapshot, but you cannot query the other rows of the insert (even those already inserted).
In PostgreSQL this is not the case: as rows are inserted, the trigger can see them, even if the transaction is not yet committed. Have you ever seen this? Do you know if this is a configurable parameter in MySQL, or is it a design choice?
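A sketch of the insert-then-procedure split mentioned above (table and procedure names are hypothetical). Wrapping both statements in one transaction keeps the pair atomic from the caller's side, although the procedure still cannot run as part of the INSERT itself:

```sql
-- Hypothetical: source(id, batch_id, payload) plus a procedure that
-- copies and transforms one batch into the destination table.
START TRANSACTION;

INSERT INTO source (batch_id, payload)
VALUES (42, 'a'), (42, 'b'), (42, 'c');

-- At this point the procedure can see and compare all rows of the
-- batch, which a FOR EACH ROW trigger could not do mid-insert.
CALL copy_and_transform_batch(42);

COMMIT;
```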

mysql triggers on multiple instances

I have a trigger that fires AFTER INSERT|UPDATE on one table (say, table 1). The trigger inserts and updates data in other tables (tables 2 and 3).
But while the trigger is doing its inserts and updates, more data gets inserted into the initial table (table 1). In that case, will that data be missed by the trigger, or will the trigger run for it after finishing its current work on table 1?
If the trigger misses the data, what should I do?
If the trigger fires for every entry, it may run all the time, and the run time will grow along with the data. Will that affect the speed of the database server?
Can I run the same trigger on multiple instances?
No data will be lost in any case: each INSERT or UPDATE on table 1 fires the trigger for its own rows, independently of other statements.
If traffic to table 1 is very high, the extra work done by the trigger will affect database server performance.

MySQL pause index rebuild on bulk INSERT without TRANSACTION

I have a lot of data to INSERT LOW_PRIORITY into a table. As the index is updated every time a row is inserted, this takes a long time. I know I could use transactions, but this is a case where I don't want the whole set to fail if just one row fails.
Is there any way to get MySQL to stop updating indexes on a specific table until I tell it to resume?
Ideally, I would like to insert 1,000 rows or so, let the index do its thing, and then insert the next 1,000 rows.
I cannot use INSERT DELAYED, as my table type is InnoDB. Otherwise, INSERT DELAYED would be perfect for me.
Not that it matters, but I am using PHP/PDO to access MySQL. Any advice you could give would be appreciated. Thanks!
ALTER TABLE tableName DISABLE KEYS;
-- perform inserts
ALTER TABLE tableName ENABLE KEYS;
This disables updating of all non-unique indexes. The disadvantage is that those indexes won't be used for SELECT queries either. Note that DISABLE KEYS only has an effect on MyISAM tables; InnoDB ignores it (with a warning).
You can, however, use multi-row inserts (INSERT INTO table(...) VALUES (...),(...),(...)), which also update indexes in batches.
AFAIK, for those using InnoDB tables, if you don't want indexes to be updated after each INSERT, you must use transactions.
For example, to insert a batch of 1,000 rows, use the following SQL:
SET autocommit=0;
-- Insert the rows one after the other, or using multi-value inserts
COMMIT;
By disabling autocommit, a transaction is started at the first INSERT. The rows are then inserted one after the other, and at the end the transaction is committed and the index changes are flushed.
If an error occurs during one of the INSERTs, the transaction is not rolled back; an error is reported to the client, which can choose to roll back or continue. Therefore, if you don't want the entire batch to be rolled back when one INSERT fails, you can log the failed INSERTs, keep inserting the remaining rows, and finally commit the transaction at the end.
However, take into account that wrapping the INSERTs in a transaction means you will not be able to see the inserted rows until the transaction is committed. It is possible to set the transaction isolation level to READ UNCOMMITTED for the SELECT, but in my tests the rows were not visible when the SELECT happened very close to the INSERT. See my post.
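The batching idea above can be sketched as follows (table and column names are hypothetical). Each COMMIT ends one transaction, and the next INSERT implicitly starts a new one:

```sql
SET autocommit = 0;

-- First batch of ~1,000 rows (shown with 3 for brevity), inserted
-- inside one transaction so index maintenance is amortized.
INSERT INTO bulk_data (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');
COMMIT;

-- Next batch: a new transaction starts implicitly with this INSERT.
INSERT INTO bulk_data (a, b) VALUES (4, 'p'), (5, 'q'), (6, 'r');
COMMIT;

SET autocommit = 1;
```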

Seeking an example of a procedure that uses row_count

I want to write a procedure that handles inserting data into 2 tables. If the insert fails in either one, the whole procedure should fail. I've tried this many different ways and cannot get it to work. I've purposely made my second insert fail, but the data is inserted into the first table anyway.
I've tried nesting IF statements based on ROW_COUNT(), but even though the second insert fails, the data is still inserted into the first table. I'm looking for a total of 2 affected rows.
Can someone please show me how to handle multiple inserts and roll back if one of them fails? A short example would be nice.
If you are using InnoDB tables (or another compatible engine) you can use MySQL's transaction support, which lets you do exactly what you want.
Basically, you start the transaction,
run the queries, checking each result;
if every result is OK, you issue COMMIT,
otherwise you issue ROLLBACK to undo all the queries within the transaction.
You can read an article about it, with examples, here.
HTH!
You could try turning autocommit off. It might be automatically committing your first insert even though you haven't explicitly committed the transaction that was started:
SET autocommit = 0;
START TRANSACTION;
......
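A sketch of such a procedure using ROW_COUNT() together with an error handler (table names, columns, and the error messages are hypothetical):

```sql
DELIMITER //

CREATE PROCEDURE insert_into_both (IN p_val INT)
BEGIN
  -- Any SQL error (e.g. the second INSERT failing) rolls back both inserts.
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    ROLLBACK;
    RESIGNAL;  -- re-raise so the caller sees the original error
  END;

  START TRANSACTION;

  INSERT INTO table_one (val) VALUES (p_val);
  IF ROW_COUNT() <> 1 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'insert into table_one failed';
  END IF;

  INSERT INTO table_two (val) VALUES (p_val);
  IF ROW_COUNT() <> 1 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'insert into table_two failed';
  END IF;

  COMMIT;  -- reached only if both inserts affected exactly one row
END//

DELIMITER ;
```

The SIGNAL statements raise an SQLEXCEPTION, which the EXIT HANDLER catches, rolling back both inserts before re-raising the error to the caller.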