I'm writing a MySQL stored procedure that does maintenance on large tables and runs every night.
Unfortunately, for reasons beyond my control, the running SP sometimes stops unexpectedly (killed by another admin, or a dropped connection).
Is there a way to "catch" those situations from within my SP and update a table (e.g. an activity log or maintenance audit table) when this happens?
I tried:
DECLARE EXIT HANDLER FOR SQLWARNING, SQLEXCEPTION
...and other specific ERRORs and SQLSTATEs:
DECLARE EXIT HANDLER FOR 1078, 1080, 1152, 1159, 1161, 1184, 1317, 3169, SQLSTATE '08S01'
but none of them seem to catch the abort or kill.
DECLARE EXIT HANDLER is for the times your query terminates normally, something like clicking the shutdown button in Windows. But having your process killed by another admin is more like resetting the PC by hand, or a power outage: there is no time to do anything.
A good way to find these situations is to investigate the MySQL logs. If you cannot access the logs, do the bookkeeping yourself. For example, create a log table (with id, sp_name, start_time, end_time) and insert a record every time the SP starts: write a timestamp and the SP name at the beginning of the SP, then update that record with an end_time timestamp at the end of the SP. Any record with a start_time but no end_time means the named SP was killed at some point after start_time.
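A minimal sketch of that bookkeeping, with illustrative table and procedure names:

```sql
-- Hypothetical audit table; adjust names and types to taste
CREATE TABLE sp_log (
  id INT AUTO_INCREMENT PRIMARY KEY,
  sp_name VARCHAR(64) NOT NULL,
  start_time DATETIME NOT NULL,
  end_time DATETIME NULL
);

-- At the top of the procedure:
INSERT INTO sp_log (sp_name, start_time) VALUES ('my_maintenance_sp', NOW());
SET @log_id = LAST_INSERT_ID();

-- As the last statement before the procedure returns:
UPDATE sp_log SET end_time = NOW() WHERE id = @log_id;

-- Afterwards, any row with no end_time marks a run that was killed
-- (or is still running):
SELECT sp_name, start_time FROM sp_log WHERE end_time IS NULL;
```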
I normally set MySql Events as follows, where UpdateTable() is a stored procedure which changes the table data.
CREATE EVENT UpdateTable_Every1Mins -- Create an Event
ON SCHEDULE EVERY 1 MINUTE STARTS CURRENT_TIMESTAMP + INTERVAL 1 MINUTE
DO CALL UpdateTable();
Occasionally, the UpdateTable procedure is time consuming and can span up to 3 minutes. Since I've set the interval to 1 minute (as shown in the query above), how do I ensure that a new UpdateTable call is not triggered while the current UpdateTable call is still running?
Thanks
You can't, directly.
Indirectly, sure.
Inside the procedure, before any other work:
IF GET_LOCK('my_lock_name',0) IS NOT TRUE THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'failed to obtain lock; not continuing';
END IF;
Right before the end of the procedure:
DO RELEASE_LOCK('my_lock_name');
This is a named lock. It doesn't lock rows or tables, it just locks the name 'my_lock_name' (a name you make up) in a global memory structure so that no other thread can lock the same name. The 0 means not to wait around for the lock but to immediately return false (0) if someone else holds the lock or null if an error occurs -- and IS NOT TRUE matches either null or 0. See GET_LOCK() in the manual. The above code works in MySQL Server 5.5 or later. Earlier versions support the locks, but do not support SIGNAL to halt execution.
Releasing the lock is not technically necessary with events, since it's released when the client thread terminates, and each event invocation is in its own thread, which terminates when the event invocation ends... but it's best practice.
SIGNAL throws an exception, which stops execution of the procedure and of this invocation of the event. The message_text is logged to the MySQL error log.
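Putting the pieces together, a sketch of the guarded procedure (UpdateTable and the lock name come from the question; the body is a placeholder):

```sql
DELIMITER $$

CREATE PROCEDURE UpdateTable()
BEGIN
  -- Refuse to run if another invocation still holds the named lock
  IF GET_LOCK('my_lock_name', 0) IS NOT TRUE THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'failed to obtain lock; not continuing';
  END IF;

  -- ... the actual table maintenance goes here ...

  DO RELEASE_LOCK('my_lock_name');
END $$

DELIMITER ;
```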
I am new to C# and SQL Server. I'm using SQL Server 2008 and I have 126 tables in my database.
There are 7 transaction tables on which insert/update queries fire frequently, as there are 30-40 users of my application.
I have written BEGIN TRAN before every query and COMMIT at the end of the query.
Now I often get a 'Timeout expired ...' error when any random user tries to open a form or save some data.
I have written ROLLBACK in my triggers if the trigger throws an error.
But I could not identify on which table the BEGIN TRAN happened or which table is deadlocked.
I have made sure that my connection is proper and open, yet I am still getting this error and cannot tell where it is coming from.
Does anyone have any idea where this 'Timeout expired' error comes from, and can you suggest a way out?
One possible reason is that the transaction cannot acquire a lock on some resource (a table, a row, ...).
In that case you may try increasing the LOCK_TIMEOUT or changing the isolation level (if acceptable).
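For example, in T-SQL (the values here are illustrative):

```sql
-- Current setting: -1 means wait indefinitely for locks (the default)
SELECT @@LOCK_TIMEOUT AS lock_timeout_ms;

-- Wait at most 10 seconds for locks on this session,
-- then fail with error 1222 instead of blocking indefinitely
SET LOCK_TIMEOUT 10000;

-- Or reduce blocking by reading without shared locks,
-- if dirty reads are acceptable for the query in question
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```

Note that the client-side command timeout ('Timeout expired' from the application) is separate from LOCK_TIMEOUT, so it is worth checking which one is actually firing.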
I would suggest reading this article.
Due to performance issues arising from row locking and a long running query in a trigger, I've opted to instead run the query as a stored procedure from a cron job every five minutes.
My problem is that I need to prevent the situation where the query takes longer than 5 minutes and collides with the next scheduled run of the stored procedure. Since I run this query in a transaction, ideally I'd just execute a rollback somewhere in the stored procedure once the five minutes were up. Is this possible?
Thanks.
The 'brute force' method would be to have a 'jobs' table. Each row has a start time and an end time. Look for the most recent start time that doesn't have an end time. If the next job wants to start and no end time is listed, kill the previous job.
You could even put the process id in there.
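A sketch of that jobs table in MySQL (names are illustrative; KILL requires the SUPER or CONNECTION_ADMIN privilege, and inside a procedure it needs dynamic SQL via PREPARE):

```sql
CREATE TABLE jobs (
  id INT AUTO_INCREMENT PRIMARY KEY,
  start_time DATETIME NOT NULL,
  end_time DATETIME NULL,
  process_id INT NULL            -- CONNECTION_ID() of the running job
);

-- On startup: find the most recent job with no end time
SELECT id, process_id FROM jobs
WHERE end_time IS NULL
ORDER BY start_time DESC
LIMIT 1;

-- If such a row exists, kill that process_id before registering ourselves:
-- KILL <process_id>;

INSERT INTO jobs (start_time, process_id) VALUES (NOW(), CONNECTION_ID());
-- ... do the work, then on the same row:
-- UPDATE jobs SET end_time = NOW() WHERE id = <that id>;
```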
I ended up just noting the epoch time at the beginning of the procedure; then, after each record I process, I commit and calculate the elapsed time. If it's above a preset timeout value I break out of the loop (LEAVE, in MySQL) and end the procedure.
The only possible failure would be if the logic (an insert) inside the stored procedure's while loop takes too long, but this is unlikely. I declared some exit handlers that do a ROLLBACK to mitigate this possibility.
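A sketch of that pattern (procedure and variable names are made up; the record-processing step is a placeholder):

```sql
DELIMITER $$

CREATE PROCEDURE process_batch()
BEGIN
  DECLARE started INT DEFAULT UNIX_TIMESTAMP();
  DECLARE timeout_secs INT DEFAULT 240;  -- stay safely under the 5-minute schedule
  DECLARE done INT DEFAULT 0;

  work: WHILE done = 0 DO
    -- ... process one record here; set done = 1 when no records remain ...
    COMMIT;

    IF UNIX_TIMESTAMP() - started >= timeout_secs THEN
      LEAVE work;  -- out of time: stop before the next scheduled run collides
    END IF;
  END WHILE work;
END $$

DELIMITER ;
```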
Is there a way to execute low-priority updates in MySQL when using InnoDB?
I am running a very high load application where there may easily be literally thousands of users trying to concurrently update the same data records. This is mostly session-based statistical information, much of which could be ignored in case there is a wait time associated with the request. I'd like to be able to check whether some table/row is locked, and if so just not pass an update query to the server. Is that possible?
Did you try setting low_priority_updates=1 in your my.cnf file? This should give SELECT queries priority when an UPDATE or INSERT would otherwise lock the table. (Note that this setting only affects storage engines that use table-level locking, such as MyISAM; InnoDB's row locks are unaffected.)
If, as you say, there is a time limit on the request, you could use a stored procedure to skip certain updates.
Something like this:
DELIMITER $$
DROP PROCEDURE IF EXISTS `updateStats` $$
CREATE PROCEDURE updateStats ()
BEGIN
  DECLARE _B SMALLINT DEFAULT 0;
  DECLARE _SECONDS INT DEFAULT 1;
  -- http://dev.mysql.com/doc/refman/5.0/en/lock-tables-and-transactions.html
  SELECT GET_LOCK('myLabel1', _SECONDS) INTO _B;
  IF _B = 1 THEN
    UPDATE table SET ........;
    DO SLEEP(_SECONDS);
    SELECT RELEASE_LOCK('myLabel1') INTO _B;
  END IF;
END $$
DELIMITER ;
This makes sure that if you get the lock, which lasts for _SECONDS, no other invocation runs the same code in that time frame.
The sleep is needed to keep the lock for one second (if the SP terminated sooner, the lock would be released immediately).
You can also add an ELSE branch to the IF, so that when the stored procedure cannot update it runs custom code, such as adding to a queue.
Suppose you want to write into the live table only once per second so as not to load it too much (you probably have a lot of indexes on it). In the ELSE branch you could update a second table that acts as a queue, and empty the queue in the IF-true branch when you also make the update.
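That ELSE branch might look like the following, assuming hypothetical stats (id, hits) and stats_queue (stat_id, hits) tables, with stat_id as the queue's primary key:

```sql
  IF _B = 1 THEN
    -- Got the lock: drain the queue into the live table, then apply this update
    UPDATE stats s
      JOIN stats_queue q ON q.stat_id = s.id
      SET s.hits = s.hits + q.hits;
    DELETE FROM stats_queue;
    UPDATE stats SET hits = hits + 1 WHERE id = 1;
    DO SLEEP(_SECONDS);
    SELECT RELEASE_LOCK('myLabel1') INTO _B;
  ELSE
    -- Lock is held elsewhere: record the change in the queue instead
    INSERT INTO stats_queue (stat_id, hits) VALUES (1, 1)
      ON DUPLICATE KEY UPDATE hits = hits + 1;
  END IF;
```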
If your user application doesn't wait for the update to complete, and doesn't care whether it completes, might this be a suitable context for a background processing manager such as Gearman?
I was under the impression that all updates to a SQL Server database are first written to the T-Log before being applied to the underlying database. In the event of the server crashing, the restore process would roll back any uncommitted transactions. I also assumed this works with transactions: if a commit or rollback is not called, the changes will not be made.
So I wanted to see how SQL Server reacts to transactions being cut short, i.e. transactional updates without a commit or rollback. What I found I don't quite understand. In particular, I don't understand how SQL Server can allow this to happen.
I used the script below to insert rows into a table with a delay, giving me enough time to stop the transaction before it reaches the commit or rollback. This, I guess, simulates the client application timing out before the transaction completes.
Create Table MyTest (Comment varchar(20))
Go
Create Procedure MyProc
as
Begin Try
Begin Transaction
Insert Into MyTest Select 'My First Entry'
WaitFor Delay '00:00:02'
Insert Into MyTest Select 'My Second Entry'
WaitFor Delay '00:00:02'
Insert Into MyTest Select 'My Third Entry'
Commit Transaction
Return 0 -- success
End Try
Begin Catch
If (@@trancount <> 0) Rollback Transaction
Declare @err int, @err_msg varchar(max)
Select @err = error_number(), @err_msg = error_message()
Raiserror(@err_msg, 16, 1)
Return @err
End Catch
If you run the script and stop the procedure partway through, then depending on how quickly you stop it, you will see that the first one or two inserts remain in the table. Could someone explain why this happens?
Select * From MyTest
I tested this on SQL 2008.
Correct, TXNs are written using "Write Ahead Logging". There are MSDN articles about it and about how it interacts with commit/rollback/checkpoints, etc.
However, on a command timeout (or when, as you are doing, code execution simply stops), the TXN is never rolled back, and locks are not released until the connection is closed (or the rollback is done later, separately). This is what SET XACT_ABORT is for.
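For example, reusing the MyTest table from the question:

```sql
-- With XACT_ABORT ON, a run-time error or an attention signal from a client
-- timeout makes SQL Server roll back the whole transaction instead of
-- leaving it open holding locks
SET XACT_ABORT ON;

BEGIN TRANSACTION;
  INSERT INTO MyTest SELECT 'My First Entry';
  WAITFOR DELAY '00:00:02';
  INSERT INTO MyTest SELECT 'My Second Entry';
COMMIT TRANSACTION;
```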
If you begin a transaction and do not commit it or roll it back, you will simply get a hanging transaction that is likely to block other users until something is done with the current transaction. SQL Server will not automatically commit or rollback a transaction on its own, simply because your code didn't do so. The transaction will stay in place and block other users until it's committed or rolled back.
Now, I can quite easily begin a transaction in my T-SQL code, not commit or roll it back, do a Select statement, and see the data that I just inserted or updated, as long as the Select statement uses the same connection as my transaction. If I attempt to do a Select on a different connection, I won't see the inserted or updated data. In fact, the Select statement might not finish at all until the transaction on the other connection is completed.