Synchronizing stored procedures in MySQL

I have two applications, both of which use the same stored procedure in MySQL.
I would like calls to this procedure to be synchronized: while one application calls it, the other has to wait.
Is there a way to do this without altering the application code (that is, by modifying only the stored procedure)?
Thanks,
krisy

You can absolutely do this within the stored procedure without changing your application code, but bear in mind that you're introducing locking issues and the possibility of timeouts.
Use GET_LOCK() and RELEASE_LOCK() to take care of the synchronization: call GET_LOCK() at the start of your stored procedure, and RELEASE_LOCK() once you're done:
IF GET_LOCK('lock_name_for_this_SP', 60) THEN
    -- ... body of SP ...
    DO RELEASE_LOCK('lock_name_for_this_SP');
ELSE
    -- ... lock timed out ...
END IF;
You'll also need to take care that your application timeouts are longer than the lock timeout so you don't incur other problems.
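Put together, a complete procedure might look like the sketch below. The procedure and lock names are hypothetical, SIGNAL requires MySQL 5.5+, and error handling is kept minimal. Only one session can be inside the locked region at a time; a second caller blocks for up to 60 seconds before giving up.

DELIMITER //

CREATE PROCEDURE sync_report()  -- hypothetical name
BEGIN
    IF GET_LOCK('sync_report_lock', 60) THEN
        -- ... the original body of the procedure goes here ...

        DO RELEASE_LOCK('sync_report_lock');
    ELSE
        -- Timed out waiting for the lock; raise an error so the
        -- calling application notices instead of silently doing nothing.
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'sync_report: timed out waiting for lock';
    END IF;
END//

DELIMITER ;

One caveat: if the body raises an error, RELEASE_LOCK() is never reached and the lock is only freed when that session ends. A more defensive version would declare an exit handler that releases the lock before re-signaling.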

Related

Reducing the number of PreparedStatements used by a spring boot application

I'm running into an issue with a Spring Boot application, in which I am getting the below error:
(conn=1126) Can't create more than max_prepared_stmt_count statements (current value: 16382)
This seems to be hitting the ceiling of max_prepared_stmt_count in MySQL. Increasing it as far as we like could be problematic as well, since it could result in OOM-killer issues.
I'm exploring whether there are any ways to limit the creation of PreparedStatements in Spring Boot.
Possible options that I can think of:
- Avoid lazy loading whenever possible, which would force Hibernate to fetch the data with fewer prepared statements, thus avoiding the problem.
- Cache the PreparedStatements created by Hibernate.
If anyone has solved this problem or has deeper insight, please share your wisdom.
MySQL has no problem with many prepared statements if they are spread out over time. Plenty of MySQL databases stay up and running for months at a time, serving prepared statements; the total count of prepared statements over those months can be limitless.
The problem is how many prepared statements are cached in the MySQL Server at any given moment.
It has long been a source of trouble for MySQL Server that it allocates a small amount of RAM in the server for each prepared statement, to store the compiled version of the prepared statement and allow it to be executed subsequently. But the server doesn't know if the client will execute it again, so it has to keep that memory allocation indefinitely. The server depends on the client to explicitly deallocate the prepared statement. This can become a problem if the client neglects to "close" its prepared statements. They become long-lived, and eventually all those accumulated data structures take too much RAM in the MySQL Server.
So the variable really should be named max_prepared_stmts_allocated, not max_prepared_stmt_count. Likewise the status variable Prepared_stmt_count should be Prepared_stmts_open or something like that.
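You can compare the number of currently allocated statements against the ceiling directly:

SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count';        -- statements allocated right now
SHOW GLOBAL VARIABLES LIKE 'max_prepared_stmt_count'; -- the ceiling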
To fix this in your case, I would make sure in the client code that you deallocate prepared statements promptly when you no longer need them.
If you open a prepared statement using a try-with-resources block, it should automatically be closed at the end of the block.
Otherwise you should call stmt.close() explicitly when you're done with it.
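For reference, this is the same lifecycle expressed at the SQL level; JDBC performs the equivalent steps over the binary protocol when you call prepareStatement() and close(). The players table here is hypothetical:

PREPARE stmt FROM 'SELECT name FROM players WHERE id = ?';  -- allocates server-side memory
SET @id = 42;
EXECUTE stmt USING @id;
DEALLOCATE PREPARE stmt;  -- frees the allocation; the step a leaked statement never reaches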

Force sequential execution in stored procedure

We have a stored procedure that executes a number of selects and updates on various tables. This stored procedure is called from an SSIS package, and it is only called once. I can see in the execution plan (and in my trace) that the queries are executing in parallel. This isn't an issue when volume is low, but when data volume is high the statements cause deadlocks and the package fails. I thought of setting the lock timeout in the SP, but I'd prefer to force sequential execution of the updates if possible. Any suggestions on how to force sequential execution within the stored procedure would be a great help.

What is the proper way to do multiple UPDATE statements?

I have a server which sends up to 20 UPDATE statements to a separate MySQL server every 3-5 seconds for a game. My question is: is it faster to concatenate them together (UPDATE ...; UPDATE ...; UPDATE ...;)? Is it faster to do them in a transaction and then commit? Or is it faster to just do each UPDATE individually?
Any insight would be appreciated!
It sort of depends on how the server connects. If the connection between the servers is persistent, you probably won't see a great deal of difference between concatenated statements or multiple separate statements.
However, if the execution involves establishing the connection, executing the SQL statement, then tearing down the connection, you will save a lot of resources on the database server by executing multiple statements at a time. The process of establishing the connection tends to be an expensive and time-consuming one, and has the added overhead of DNS resolution since the machines are separate.
It makes the most sense to me to establish the connection, begin a transaction, execute the statements individually, commit the transaction, and disconnect from the database server. In this scenario, whether you send all the UPDATE statements as a single concatenation or as multiple individual statements probably won't make a big difference, especially if this is just regular communication between these two servers and you don't expect it to scale with user load.
The use of the transaction assumes that your 3-5 second periodic bursts of UPDATE statements are logically related somehow. If they are not interdependent, then you could skip the transaction and save some resources.
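A minimal sketch of that flow, with a hypothetical players table standing in for the game's real schema:

START TRANSACTION;
UPDATE players SET score = score + 10 WHERE id = 1;
UPDATE players SET gold  = gold  - 25 WHERE id = 7;
-- ... the rest of the burst's statements ...
COMMIT;

If any statement fails, issue ROLLBACK instead of COMMIT, and the whole burst can be retried as a unit.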
As with any question regarding performance, the best answer is if your current system is meeting your performance and scaling needs, you ought not pay too much attention to micro-optimizing it just yet.
It is almost always faster to wrap these UPDATEs in a single transaction block.
The price for this is that if anything fails inside that block, it will be as if nothing happened at all; you will have to repeat your work.
Also, keep in mind that transactions in MySQL only work with a transactional engine such as InnoDB.
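You can check which engine a table uses, and convert it if needed (the table name is hypothetical):

SHOW TABLE STATUS LIKE 'players';      -- the Engine column shows InnoDB, MyISAM, etc.
ALTER TABLE players ENGINE = InnoDB;   -- convert so the table participates in transactions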

MySQL: Parallel execution of multiple statements within a stored procedure?

I have a procedure (procedureA) that loops through a table and calls another procedure (procedureB) with variables derived from that table.
Each call to procedureB is independent of the last call.
When I run procedureA my system resources show a maximum CPU use of 50% (I assume that is 1 out of my 2 CPU cores).
However, if I open two instances of the mysql terminal and execute a query in both terminals, both CPU cores are used (CPU usage can reach close to 100%).
How can I achieve the same effect inside a stored procedure?
I want to do something like this:
BEGIN
    CALL procedureB(var1);  -- would run on CPU core #1
    SET var1 = var1 + 1;
    CALL procedureB(var1);  -- would run on CPU core #2
END
I know it's not going to be that easy...
Any tips?
Within MySQL, to get something done asynchronously you'd have to use CREATE EVENT, but I'm not sure whether creating one is allowed within a stored procedure. (On a side note: asynchronous inserts can of course be done with INSERT DELAYED, but that's 1 thread, period.)
Normally, you are much better off having a couple of processes/workers/daemons which can be accessed asynchronously by your program and have their own database connections, but that of course won't be in the same procedure.
You can write your own daemon as a stored procedure, and schedule multiple copies of it to run at regular intervals, say every 5 minutes, 1 minute, 1 second, etc.
Use GET_LOCK() with N well-defined lock names to abort the event execution if another copy of the event is still running, so that at most N parallel copies run at a time.
Use a "job table" to list the jobs to execute, with an ID column to define the execution order. Be sure to use good transaction and locking practices, of course - this is re-entrant programming, after all.
Each row can name a stored procedure to execute and possibly its parameters. You can even have multiple types of jobs, job tables, and worker events for different tasks.
Use PREPARE and EXECUTE with the CALL statement to dynamically call stored procedures whose names are stored in strings.
Then just add rows to the job table as needed, even inserting in big batches, and let your worker events process them as fast as they can; a sketch follows below.
I've done this before, in both Oracle and MySQL, and it works well. Be sure to handle errors and log them somewhere - and log successes too, for that matter - for debugging, auditing, and performance tuning. N = #CPUs may not be the best fit, depending on your data and the types of jobs; I've seen N = 2x#CPUs work best for data-intensive tasks, where lots of parallel disk I/O matters more than computational power.
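A minimal sketch of this pattern, under some assumptions: all names (job_queue, worker, worker_event_1, ...) are hypothetical, each job takes a single non-NULL integer parameter, error handling is omitted, and the event scheduler must be enabled (SET GLOBAL event_scheduler = ON).

-- Hypothetical job table: one row per stored-procedure call to execute.
CREATE TABLE job_queue (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    proc_name  VARCHAR(64) NOT NULL,                 -- procedure to CALL
    param      INT NULL,                             -- single parameter, for simplicity
    status     ENUM('pending','running','done') NOT NULL DEFAULT 'pending',
    claimed_by BIGINT NULL                           -- CONNECTION_ID() of the worker
) ENGINE = InnoDB;

DELIMITER //

-- The "daemon": drains the queue; a named lock ensures at most one copy
-- per lock name runs at a time.
CREATE PROCEDURE worker(IN p_lock VARCHAR(64))
BEGIN
    DECLARE v_id INT;
    DECLARE v_proc VARCHAR(64);
    DECLARE v_param INT;

    IF GET_LOCK(p_lock, 0) THEN          -- abort instantly if a copy already runs
        drain: LOOP
            -- Atomically claim the oldest pending job for this connection.
            UPDATE job_queue
               SET status = 'running', claimed_by = CONNECTION_ID()
             WHERE status = 'pending'
             ORDER BY id LIMIT 1;
            IF ROW_COUNT() = 0 THEN
                LEAVE drain;             -- queue is empty
            END IF;

            SELECT id, proc_name, param INTO v_id, v_proc, v_param
              FROM job_queue
             WHERE status = 'running' AND claimed_by = CONNECTION_ID()
             LIMIT 1;

            -- Dynamic CALL of the procedure named in the row.
            SET @sql = CONCAT('CALL ', v_proc, '(', v_param, ')');
            PREPARE stmt FROM @sql;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;

            UPDATE job_queue SET status = 'done' WHERE id = v_id;
        END LOOP;
        DO RELEASE_LOCK(p_lock);
    END IF;
END//

DELIMITER ;

-- Two parallel workers (N = 2), each guarded by its own lock name.
CREATE EVENT worker_event_1 ON SCHEDULE EVERY 1 SECOND DO CALL worker('worker_1');
CREATE EVENT worker_event_2 ON SCHEDULE EVERY 1 SECOND DO CALL worker('worker_2');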

MySQL triggers + replication with multiple databases

I am running a couple of databases on MySQL 5.0.45 and am trying to get my legacy database to sync with a revised schema, so I can run both side by side. I am doing this by adding triggers to the new database, but I am running into problems with replication. My setup is as follows.
Server "master"
Database "legacydb", replicates to server "slave".
Database "newdb", has triggers which update "legacydb" and no replication.
Server "slave"
Database "legacydb"
My updates to "newdb" run fine and set off my triggers, which update "legacydb" on the "master" server. However, the changes are not replicated down to the slave. The MySQL docs say that, for simplicity, replication looks at the current database context (e.g. SELECT DATABASE();) when deciding which statements to replicate, rather than at the effect of the statement. My trigger runs in the context of database "newdb", so replication ignores the updates.
I have tried moving the update statement to a stored procedure in "legacydb". This works fine (i.e. data replicates to slave) when I connect to "master" and manually run "USE newdb; CALL legacydb.do_update('Foobar', 1, 2, 3, 4);". However, when this procedure is called from a trigger it does not replicate.
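For concreteness, the arrangement described is roughly the sketch below; the table name widgets and its columns are hypothetical, while legacydb.do_update and its five arguments come from the question:

-- On "master", in newdb: a trigger pushes each change into legacydb.
DELIMITER //
CREATE TRIGGER newdb.sync_to_legacy
AFTER INSERT ON newdb.widgets
FOR EACH ROW
BEGIN
    CALL legacydb.do_update(NEW.name, NEW.a, NEW.b, NEW.c, NEW.d);
END//
DELIMITER ;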
So far, my thinking on how to fix this has run along one of the following lines.
Force the trigger to set a new current database. This would be easiest, but I don't think this is possible. This is what I hoped to achieve with the stored procedure.
Replicate both databases, and have triggers in both master and slave. This would be possible, but a pain to set up.
Force the replication to pick up all changes to "legacydb", regardless of the current database context.
If replication runs at too high a level, it will never even see any updates run by my trigger, in which case no amount of hacking is going to achieve what I want.
Any help on how to achieve this would be greatly appreciated.
This may have something to do with it:
A stored function acquires table locks before executing, to avoid inconsistency in the binary log due to mismatch of the order in which statements execute and when they appear in the log. Statements that invoke a function are recorded rather than the statements executed within the function. Consequently, stored functions that update the same underlying tables do not execute in parallel.
In contrast, stored procedures do not acquire table-level locks. All statements executed within stored procedures are written to the binary log.
Additionally, there is a whole list of restrictions on triggers and other stored routines:
http://dev.mysql.com/doc/refman/5.0/en/routine-restrictions.html