I have a trigger that fires after insert/update on one table (say, table 1). The trigger inserts and updates data in other tables (tables 2 & 3).
But while the trigger is doing its inserts and updates, another row may be inserted into the initial table (table 1). In that case, will that row be missed by the trigger, or will the trigger run for it once it has finished with the row it is currently processing?
If the trigger misses the data, what should I do?
If the trigger fires for every entry, it may end up running all the time, and its run time will grow along with the data. Will that affect the speed of the database server?
Can I run the same thread on multiple instances?
No data will be lost in any case: each insert fires the trigger as part of its own INSERT statement, so later inserts simply fire their own trigger executions.
If the traffic to table 1 is very high, it will affect database server performance.
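For illustration, a minimal sketch of such a trigger (table and column names are placeholders, not the actual schema). It runs once per inserted row, inside the same statement that inserted the row, so a later insert simply fires its own execution of the trigger:

DELIMITER //
CREATE TRIGGER trg_table1_after_ins
AFTER INSERT ON table1
FOR EACH ROW
BEGIN
  -- Runs once for every row inserted into table1, as part of that INSERT statement
  INSERT INTO table2 (ref_id, payload) VALUES (NEW.id, NEW.payload);
  UPDATE table3 SET counter = counter + 1 WHERE ref_id = NEW.id;
END//
DELIMITER ;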
I'm using MySQL with the InnoDB engine, and I have created a trigger for BEFORE INSERT.
I want to be sure: if two insert queries are fired at the same time, will both triggers run in parallel or sequentially?
I added a SLEEP() in the trigger and fired two insert queries, and from the execution times it looks like the second trigger is waiting for the first one to finish.
For InnoDB in MySQL, insertions happen in parallel. In my trigger I was updating a different table: when both inserts updated the same row, the insertions happened sequentially (the second waited for the first's row lock), and when they updated different rows, the insertions happened in parallel.
Read this post for more details: https://stackoverflow.com/a/32382959/9599500
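Roughly the test I used, sketched with placeholder table and column names: a BEFORE INSERT trigger that sleeps and then updates a row in another table; firing an INSERT from two sessions at once shows whether the second one waits.

DELIMITER //
CREATE TRIGGER trg_t1_before_ins
BEFORE INSERT ON t1
FOR EACH ROW
BEGIN
  DO SLEEP(5);                               -- make any overlap easy to see in the timings
  UPDATE t2 SET cnt = cnt + 1 WHERE id = 1;  -- same row in both inserts => the second waits on the row lock
END//
DELIMITER ;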
I'm using an Aurora DB (i.e. MySQL version 5.6.10) as a queue, and I'm using a stored procedure to pull records out of a table in batches. The sproc works with the following steps...
Select the next batch of data into a temp table
Write the IDs of the records from the temp table into a log table
Output the records
Once a record has been added to the log, the sproc won't select it again the next time it's called, so multiple servers can call this sproc and each deal with batches of data from the queue without stepping on each other's toes.
The sproc runs in a fraction of a second, but my company is now spinning up servers automatically, and these cloned servers are calling the sproc at exactly the same time, with the result that the same records are being selected twice.
Is there a way I can limit this sproc to one call at a time? Ideally, any additional calls should wait until the first call is finished, and then they can run.
Unfortunately, I have very little experience working with MySQL, so I'm not really sure where to start. I'd much appreciate it if anyone could point me in the right direction.
This is a job for MySQL table locking. Try something like this. (You didn't show us your queries so there's a lot of guesswork here.)
SET autocommit = 0;
-- Lock every permanent table the sequence touches: with LOCK TABLES in effect,
-- referencing an unlocked table is an error. The WRITE lock on logtable is what
-- serializes concurrent callers; temporary tables are exempt from table locking.
LOCK TABLES logtable WRITE, whatevertable READ;
CREATE TEMPORARY TABLE temptable AS
SELECT whatever FROM whatevertable;
INSERT INTO logtable (id)
SELECT id FROM temptable;
COMMIT;
UNLOCK TABLES;
If more than one connection tries to run this sequence concurrently, one will wait for the other's UNLOCK TABLES before proceeding. You say your SP is quick, so probably nobody will notice the short wait.
Pro tip: When you have the same timed code running on lots of servers, it's best to put in a short random delay before running the job. That way the shared resources (like your MySQL database) won't get hammered by a whole lot of requests precisely timed to be simultaneous.
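For example (a sketch; the procedure name is a placeholder), each server could add a random stagger of up to a couple of seconds before the batch pull:

DO SLEEP(RAND() * 2);   -- random 0-2 second stagger so simultaneous callers spread out
CALL pull_next_batch(); -- placeholder for the actual stored procedure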
I hope you can offer some words of wisdom on an issue I've been struggling with. I am using a MySQL 5.6 trigger to copy data during inserts into a separate table (not the one I'm inserting into).
I'm also modifying the data as it's being copied and need to compare rows within the insert to each other. Due to the lack of support for FOR EACH STATEMENT, I cannot act on the entire insert dataset while the transaction is still in progress; it appears I can only work on the current row via the supported FOR EACH ROW syntax.
Does anybody know a way to overcome this?
thanks!
UPDATE 18/01/18: @solarflare, thanks for your answers. I looked into splitting the operation into an insert followed by a call to a stored procedure. It would work, but it's not a path I want to go down as it breaks the atomicity of the operation. I tested the same code on PostgreSQL and it works fine.
It appears that when performing a bulk insert, an AFTER INSERT ... FOR EACH ROW trigger in MySQL takes a snapshot of the table as it was before the bulk insert transaction started and lets you query that snapshot, but you cannot query the other rows of the insert (even if they have already been inserted).
In PostgreSQL this is not the case: as rows are inserted, the trigger can see them, even if the transaction is not yet committed. Have you ever seen this / do you know whether this is a configurable parameter in MySQL or a design choice?
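For anyone weighing the same trade-off, here is a rough sketch of the split approach (table, column, and procedure names are made up); wrapping the insert and the procedure call in one explicit transaction from the application keeps the pair atomic, though the logic no longer lives in a trigger:

START TRANSACTION;
INSERT INTO source_table (col_a, col_b) VALUES (1, 'x'), (2, 'y');
-- Every row of the batch is now visible, so the procedure can compare rows to each other
CALL copy_and_transform_batch();
COMMIT;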
My issue currently is as follows:
I have Table A, whose items are copied to a transfer table whenever an update, insert, or delete occurs on Table A, i.e.:
Table A -> new insert
Trigger activates and inserts a row into the transfer table with 2 extra columns - DateQueried and QueryType (where DateQueried is the date the trigger fired and QueryType is 'Delete', 'Insert' or 'Update' depending on the trigger type)
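A simplified sketch of one of these triggers (the real tables have more columns; the names here are illustrative):

CREATE TRIGGER trg_tablea_after_ins
AFTER INSERT ON TableA
FOR EACH ROW
  INSERT INTO TransferTable (RecordId, DateQueried, QueryType)
  VALUES (NEW.Id, NOW(), 'Insert');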
However, now I need to transfer this data to a web server over a linked table (all of this is fine and working as it should). Currently I have a PowerShell script to do this. The script does the following:
Select all values from my transfer table, ordered by DateQueried
Run a foreach loop that calls a stored procedure to insert, update, or delete that value on the web server, depending on the value of QueryType.
This method is extremely slow. The script runs on a 30-minute timer, and we can receive over 100,000 rows within that 30-minute window, which means 100,000 connections to the DB via the PowerShell script (especially when there are 15 tables that need to run through this process).
Is there a better way to get these values out by running an inner join? Previously I would just run the entire table at once through a stored procedure that would delete all values from the second server with a QueryType of 'Delete', then run inserts, then updates. However, this had issues if a person were to create a new job, then delete the job, then recreate the job, then update the job, then delete the job again. My stored procedure would process all deletes, THEN all inserts, THEN all updates, so even though the row had been deleted it would go and insert it back again.
I then rejigged it yet again and instead transferred only primary keys across. Whenever I ran the stored procedure it would process deletes based on those primary keys, and for inserts and updates it would first join back to the original table on the primary keys (so a row that had since been deleted would return no results and therefore not be inserted). But I ran into a problem where the query was chewing up far too many resources and at times bombing out the server (it had to join over 100,000 results to a table with over 10 million rows). There was also another issue where, when the join found nothing, it would insert a row with only NULL values in each column; when that happened again there would be a primary key error and the stored procedure would stop.
Is there another option I am overlooking that would make this process a great deal faster, or am I just stuck with the limitations of the server and maybe have to suggest that the company only processes uploads at the end of every day rather than on the 30-minute schedule they would like?
Stick with the bulk Delete/Insert/Update order.
But:
Only insert rows where a later delete is not present (all changes are lost anyway).
Only process updates where a later insert or delete is not present (all changes would be overwritten).
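A rough sketch of those filters, assuming a transfer table with an auto-increment key TransferId plus RecordId, DateQueried, and QueryType columns (names are illustrative):

-- Inserts: skip records that also have a later 'Delete' entry
SELECT t.*
FROM TransferTable t
WHERE t.QueryType = 'Insert'
  AND NOT EXISTS (SELECT 1 FROM TransferTable d
                  WHERE d.RecordId = t.RecordId
                    AND d.QueryType = 'Delete'
                    AND d.TransferId > t.TransferId);

-- Updates: skip records that have a later 'Insert' or 'Delete' entry
SELECT t.*
FROM TransferTable t
WHERE t.QueryType = 'Update'
  AND NOT EXISTS (SELECT 1 FROM TransferTable x
                  WHERE x.RecordId = t.RecordId
                    AND x.QueryType IN ('Insert', 'Delete')
                    AND x.TransferId > t.TransferId);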
I have 2 databases: Database 1 and Database 2.
Now each database has one table, say Table 1 (in Database 1) and Table 2 (in Database 2).
Table 1 is basically a copy of Table 2 (just for backup).
How can I sync Table 2 if Table 1 is updated?
I am using MySQL with the InnoDB storage engine, and in back-end programming I am using PHP.
Further, I can check for updates every 15 minutes using a PHP script, but that takes too much time because each table has 51,000 rows.
So, how can I achieve something like: if an administrator/superuser updates Table 1, that update should be immediately applied to Table 2?
Also, is there a way bi-directional updates can work, i.e. both can be masters?
Instead of Table 1 being the only master, can both Table 1 and Table 2 be masters, so that if an update is made to either table, the other one is updated accordingly?
If I'm not wrong, what you are looking for is replication, which does this exact thing for you. If you configure transactional replication, every DML operation will be cascaded automatically to the mirrored DB, so there is no need for continuous polling from your application.
Quoted from the MySQL Replication documentation:
Replication enables data from one MySQL database server (the master)
to be replicated to one or more MySQL database servers (the slaves).
Replication is asynchronous - slaves need not be connected permanently
to receive updates from the master. This means that updates can occur
over long-distance connections and even over temporary or intermittent
connections such as a dial-up service. Depending on the configuration,
you can replicate all databases, selected databases, or even selected
tables within a database.
Per your comment: yes, bi-directional replication can also be configured.
See Configuring Bi-Directional Replication
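As a rough illustration of what the setup involves (the host, credentials, and binlog coordinates below are placeholders; the exact steps depend on your MySQL version), the replica is pointed at the master with something like:

-- On the replica, after giving each server a unique server-id and enabling log-bin on the master:
CHANGE MASTER TO
  MASTER_HOST = 'db1.example.com',
  MASTER_USER = 'repl_user',
  MASTER_PASSWORD = '***',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;
SHOW SLAVE STATUS\G   -- verify Slave_IO_Running / Slave_SQL_Running are both Yes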
As Rahul stated, what you are looking for is replication.
The standard replication mode of MySQL is master -> slave, which means that one of the databases is the "master" and the rest are slaves. All changes must be written to the master DB and will then be copied to the slaves. More info can be found in the MySQL documentation on replication.
There is also an excellent guide in the DigitalOcean community forums on master <-> master replication setup.
If the requirement for "Administrator/Superuser" weren't in your question, you could use MySQL's replication features on the databases.
If you want the data to be synced to Table 2 immediately upon inserting into Table 1, you could use a trigger on the table. In that trigger you can check which user submitted the data (if you have a column in that table recording which user inserted it). If the user is an admin, have the trigger duplicate the data; if the user is a normal user, do nothing.
Next, for normal users entering data, you could keep a counter on each row, increasing it by 1 for each new 'normal' user's row. In the same trigger you could also check what the counter has reached; say it reaches 10, then duplicate all the pending rows to the other table, reset the counter, and remove the old counter values from the just-duplicated rows.
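A minimal sketch of the admin-check part of such a trigger (the CreatedBy column, the admins lookup table, and the database/table names are assumptions to adapt to your schema):

DELIMITER //
CREATE TRIGGER trg_table1_after_ins_sync
AFTER INSERT ON database1.table1
FOR EACH ROW
BEGIN
  -- Copy the row to the backup table only when an admin inserted it
  IF EXISTS (SELECT 1 FROM database1.admins a WHERE a.username = NEW.CreatedBy) THEN
    INSERT INTO database2.table2 (id, col_a, col_b)
    VALUES (NEW.id, NEW.col_a, NEW.col_b);
  END IF;
END//
DELIMITER ;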