That is exactly the question: is it possible to do a ROLLBACK in a MySQL trigger?
If the answer is yes, then please explain how.
I've found out that this functionality has existed since MySQL 5.5 and does not work in earlier releases.
The trigger itself cannot issue a ROLLBACK or COMMIT.
To initiate a rollback, you have to raise an exception; that makes the INSERT/UPDATE/DELETE statement that fired the trigger fail.
The actual ROLLBACK or COMMIT has to be issued around your SQL statement, outside the trigger.
To raise the exception in a trigger on table XXX, for example:
DELIMITER $$
CREATE TRIGGER Trigger_XXX_BeforeInsert BEFORE INSERT ON XXX
FOR EACH ROW
BEGIN
    IF [your test condition] THEN
        SIGNAL SQLSTATE '45001' SET MESSAGE_TEXT = 'No way! You cannot do this!';
    END IF;
END$$
DELIMITER ;
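For illustration, a hedged sketch of what happens when the trigger fires (the table XXX, the column name, and the condition are placeholders): the offending INSERT is aborted with SQLSTATE '45001', and on an InnoDB table its changes are rolled back, while the surrounding transaction stays open.
-- Assuming XXX is an InnoDB table and the trigger above is installed:
START TRANSACTION;
INSERT INTO XXX (some_column) VALUES ('value that trips the condition');
-- fails with SQLSTATE '45001': "No way! You cannot do this!" -- only this statement is aborted
-- The transaction is still open; you decide what to do with it:
ROLLBACK;   -- or COMMIT any other work already done in this transaction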
If the trigger raises an exception, the statement that fired it fails and its changes are rolled back. Will this work for you?
From: http://dev.mysql.com/doc/refman/5.1/en/trigger-syntax.html
The trigger cannot use statements that explicitly or implicitly begin or end a transaction, such as START TRANSACTION, COMMIT, or ROLLBACK.
and
For transactional tables, failure of a statement should cause rollback of all changes performed by the statement. Failure of a trigger causes the statement to fail, so trigger failure also causes rollback. For nontransactional tables, such rollback cannot be done, so although the statement fails, any changes performed prior to the point of the error remain in effect.
As the title says, are stored procedures in MySQL atomic? i.e. would something like
for (..)
    <check_if_row_has_flag>
for (..)
    <update_row>
work atomically?
Interestingly, I couldn't find much about this on Google except one forum thread from 2009.
No, stored procedures are not atomic.
The pseudocode you show above has a race condition. The first loop, checking if a row has a flag, would return an answer, but unless you do a locking read, another concurrent session could change the flag immediately after your procedure reads the row.
This is the effect of optimistic locking. Rows are not locked until you issue a statement to lock them. So even within a transaction, you don't have atomic locking.
The atomicity that MySQL supports is for transaction commit. Transactions are atomic in that all changes made during the transaction succeed, or else all are rolled back. Other sessions cannot see your transaction in a partially-complete state.
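To illustrate the locking-read approach, here is a minimal sketch assuming InnoDB and a hypothetical jobs table with columns id and flag; the FOR UPDATE read holds the row lock until COMMIT, so the check and the update behave as one atomic unit:
START TRANSACTION;
-- Lock the row while we look at it; other sessions trying to lock it will wait here.
SELECT flag FROM jobs WHERE id = 42 FOR UPDATE;
-- Decide based on the returned flag, then update while still holding the lock.
UPDATE jobs SET flag = 1 WHERE id = 42 AND flag = 0;
COMMIT;  -- releases the row lock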
Re the comments below:
You can call a procedure within a transaction from your app:
START TRANSACTION;
CALL MyProcedure();
COMMIT;
You can even start and commit a transaction (or multiple transactions serially), explicitly in the body of a procedure:
CREATE PROCEDURE MyProcedure()
BEGIN
START TRANSACTION;
...UPDATE, INSERT, DELETE, blah blah...
COMMIT;
END
But the procedure itself does not implicitly start or commit a transaction.
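As a minimal sketch of that point (the table t and procedure name are hypothetical): with autocommit enabled and no explicit transaction, each statement inside the procedure commits on its own.
DELIMITER $$
CREATE PROCEDURE no_implicit_txn()
BEGIN
    -- No START TRANSACTION here: under autocommit, each INSERT commits by itself,
    -- so if the second one fails, the first is already permanent.
    INSERT INTO t (val) VALUES ('a');
    INSERT INTO t (val) VALUES ('b');
END$$
DELIMITER ;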
I have a program starting a transaction and doing a lot of things. In the end I commit. In case of error, I want to rollback and save the error.
I am doing the following:
1) ROLLBACK;
2) START TRANSACTION;
3) add the error to the database
4) COMMIT;
5) exit
Is this a good idea, or is it better to start a new connection and never issue a rollback or commit on the old one?
If I never commit, where is my data? What happens to it?
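A minimal sketch of the sequence described above (the error_log table, its columns, and the message are hypothetical):
-- Something failed inside the long transaction:
ROLLBACK;                                      -- 1) discard the partial work
START TRANSACTION;                             -- 2) new transaction on the same connection
INSERT INTO error_log (created_at, message)    -- 3) record the error
VALUES (NOW(), 'description of the failure');
COMMIT;                                        -- 4) persist the error row
-- 5) exit / close the connection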
I want to have a stored procedure that does the following:
1. Locks a table
2. Checks for a value in it
3. Updates the same table based on that value
4. Unlocks the table
If an error occurs between 1 and 4, will the table be unlocked? Or do I need to capture the error somehow? (how?)
Is there a better way to do this?
You can't lock a table within a stored procedure in MySQL.
SQL Statements Not Permitted in Stored Routines
Stored routines cannot contain arbitrary SQL statements. The following statements are not permitted:
The locking statements LOCK TABLES and UNLOCK TABLES.
— http://dev.mysql.com/doc/refman/5.6/en/stored-program-restrictions.html
If you are using InnoDB, then you can accomplish your purpose by locking the rows of interest using locking reads with SELECT ... FOR UPDATE. When you hit an error and roll back the transaction, the rows are unlocked automatically.
I wrote about this in detail in this recent answer, where the question involved avoiding conflicting inserts, but the underlying concept is the same whether you know the row you want already exists, or whether it might or might not exist.
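For example, a hedged sketch of that approach as a procedure (the accounts table, its columns, and the procedure name are hypothetical), assuming InnoDB: the EXIT handler rolls back on any SQL error, which also releases the row lock taken by the FOR UPDATE read.
DELIMITER $$
CREATE PROCEDURE withdraw(IN p_id INT, IN p_amount DECIMAL(10,2))
BEGIN
    DECLARE v_balance DECIMAL(10,2);
    -- Any SQL error: roll back and re-raise; row locks are released automatically.
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;

    START TRANSACTION;
    SELECT balance INTO v_balance FROM accounts WHERE id = p_id FOR UPDATE;  -- locks the row
    IF v_balance >= p_amount THEN
        UPDATE accounts SET balance = balance - p_amount WHERE id = p_id;
    END IF;
    COMMIT;
END$$
DELIMITER ;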
Have you considered using transactions with a try-catch block? See this:
BEGIN TRAN
SAVE TRAN S1 -- Savepoint so any rollbacks will only affect this transaction
BEGIN TRY
/* Do your work in here */
END TRY
BEGIN CATCH
    ROLLBACK TRAN S1 -- roll back to the savepoint only
    DECLARE @ErrorMessage NVARCHAR(4000), @Severity INT, @State INT
    SET @ErrorMessage = ERROR_MESSAGE()
    SET @Severity = ERROR_SEVERITY()
    SET @State = ERROR_STATE()
    RAISERROR(@ErrorMessage, @Severity, @State) -- re-throw the error if needed
END CATCH
Consider the following:
START TRANSACTION;
BEGIN;
INSERT INTO prp_property1 (module_name,environment_name,NAME,VALUE) VALUES ('','production','','300000');
/** Assume there is a SQL syntax error here... **/
Blah blah blah
DELETE FROM prp_property1 WHERE environment_name = 'production';
COMMIT TRANSACTION;
Question:
I noticed that the transaction automatically rolls back and the record insert attempt fails.
If I don't provide an error handler or error check along with ROLLBACK TRANSACTION as above, is that safe? It seems to do the job in an example like this, because the COMMIT TRANSACTION never gets executed.
I assume the transaction is rolled back immediately and discarded as soon as an error occurs.
No, transactions are not rolled back as soon as an error occurs. But you may be using a client-application which applies this policy.
For example, if you are using the mysql command-line client, then it normally stops executing when an error occurs and will quit. Quitting while a transaction is in progress does cause it to be rolled back.
When you are writing your own application, you can control the policy on rollback, but there are some exceptions:
Quitting (i.e. disconnecting from the database) always rolls back a transaction in progress
A deadlock or lock-wait timeout implicitly causes a rollback
Other than these conditions, if you invoke a command which generates an error, the error is returned as normal, and you are free to do whatever you like, including committing the transaction anyway.
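To illustrate that last point, a minimal sketch of a stored procedure (the audit_log table and procedure name are hypothetical): a CONTINUE handler catches the error, execution carries on, and the transaction is still committed.
DELIMITER $$
CREATE PROCEDURE commit_despite_error()
BEGIN
    DECLARE v_failed INT DEFAULT 0;
    -- Catch any SQL error and keep going instead of aborting.
    DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET v_failed = 1;

    START TRANSACTION;
    INSERT INTO audit_log (note) VALUES ('step 1');
    INSERT INTO audit_log (note) VALUES ('step 2');  -- suppose this one fails
    -- Even if the second insert failed, we are free to commit what succeeded:
    COMMIT;
END$$
DELIMITER ;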
Use a MySQL stored procedure:
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
    END;
    DECLARE EXIT HANDLER FOR SQLWARNING
    BEGIN
        ROLLBACK;
    END;

    START TRANSACTION;
    INSERT INTO prp_property1 (module_name, environment_name, NAME, VALUE) VALUES ('', 'production', '', '300000');
    -- [the statement that raises the error goes here]
    COMMIT;
END
You can set handlers to roll back on either a warning or an error, so you don't need the DELETE: with a transaction, every row inserted before the error is removed by the rollback.
You may use a procedure to do this more effectively.
Transaction with Stored Procedure in MySQL Server
I would like to add to what @MarkR already said. Error handling, assuming the InnoDB engine, happens as described in the MySQL Server documentation:
If you run out of file space in a tablespace, a MySQL Table is full error occurs and InnoDB rolls back the SQL statement.
A transaction deadlock causes InnoDB to roll back the entire transaction.
A duplicate-key error rolls back the SQL statement.
A row too long error rolls back the SQL statement.
Other errors are mostly detected by the MySQL layer of code (above the InnoDB storage engine level), and they roll back the corresponding SQL statement.
My understanding is also that when the MySQL session ends (when the PHP script ends), anything that is not committed is rolled back. I have yet to find a really reliable source to back this statement, so do not take my word for it.
I've tested these three situations; MySQL does not roll back automatically:
A transaction deadlock causes InnoDB to roll back the entire transaction.
A duplicate-key error rolls back the SQL statement.
A row too long error rolls back the SQL statement.
Only the affected records fail, the rest of the records succeed unless your application calls "rollback" explicitly.
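A minimal sketch of what that looks like (hypothetical InnoDB table t with a primary key), consistent with the statement-level rollback described above:
CREATE TABLE t (id INT PRIMARY KEY) ENGINE=InnoDB;
START TRANSACTION;
INSERT INTO t VALUES (1);   -- succeeds
INSERT INTO t VALUES (1);   -- duplicate-key error: only this statement is rolled back
INSERT INTO t VALUES (2);   -- still succeeds; the transaction remains open
COMMIT;                     -- rows 1 and 2 are now committed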
I was under the impression that all updates to a SQL Server database are first added to the T-Log before being applied to the underlying database. In the event of the server crashing, the restore process would roll back any uncommitted transactions. I also assumed this applies to transactions: if a commit or rollback is never issued, the changes will not be made.
So I wanted to see how SQL Server reacts to a transaction being cut short, i.e. transactional updates without a commit or rollback. What I found I don't quite understand, especially how SQL Server can allow it to happen.
I used the script below to insert rows into a table with a delay, giving me enough time to stop the transaction before it reaches the commit or rollback. This, I guess, simulates the client application timing out before the transaction completes.
Create Table MyTest (Comment varchar(20))
Go
Create Procedure MyProc
as
Begin Try
Begin Transaction
Insert Into MyTest Select 'My First Entry'
WaitFor Delay '00:00:02'
Insert Into MyTest Select 'My Second Entry'
WaitFor Delay '00:00:02'
Insert Into MyTest Select 'My Third Entry'
Commit Transaction
Return 0 -- success
End Try
Begin Catch
If (@@trancount <> 0) Rollback Transaction
Declare @err int, @err_msg varchar(max)
Select @err = error_number(), @err_msg = error_message()
Raiserror(@err_msg, 16, 1)
Return @err
End Catch
If you run the script, depending on how quickly you stop the procedure, you will see that the first one or two inserts will remain in the table. Could someone explain why this would happen?
Select * From MyTest
I tested this on SQL 2008.
Correct, TXNs are written using write-ahead logging. There are MSDN articles about it and how it interacts with commit/rollback/checkpoints, etc.
However, on a command timeout (or when, as here, something simply stops the code executing), the TXN is never rolled back, and its locks are not released, until the connection is closed (or the rollback is done separately later). This is what SET XACT_ABORT is for.
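As a hedged illustration in the same T-SQL as the script above (the procedure name is hypothetical): with SET XACT_ABORT ON, a run-time error or a client attention/timeout terminates the batch and rolls back the open transaction instead of leaving it hanging.
CREATE PROCEDURE MyProcXactAbort
AS
BEGIN
    SET XACT_ABORT ON;  -- roll back the whole transaction on run-time errors or client timeouts
    BEGIN TRANSACTION;
    INSERT INTO MyTest SELECT 'First entry';
    WAITFOR DELAY '00:00:02';
    INSERT INTO MyTest SELECT 'Second entry';
    COMMIT TRANSACTION;
END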
If you begin a transaction and do not commit it or roll it back, you will simply get a hanging transaction that is likely to block other users until something is done with the current transaction. SQL Server will not automatically commit or rollback a transaction on its own, simply because your code didn't do so. The transaction will stay in place and block other users until it's committed or rolled back.
Now, I can quite easily begin a transaction in my T-SQL code, not commit or roll it back, and run a Select statement that sees the data I just inserted or updated, as long as the Select uses the same connection as my transaction. If I attempt the Select from a different connection, I won't see the inserted or updated data; in fact, the Select might not finish at all until the transaction on the other connection is completed.