I want to execute multiple commands to update a customer database, but I want these commands to execute in a transaction, so that when an error occurs in one command all changes are rolled back.
When I run the code in this example, if the test2 table already exists, the rollback does not work and the inserted row remains in the test table.
What am I doing wrong?
The MySQL server is 5.1.
The engine of the tables is InnoDB.
code example:
set autocommit = 0;
drop procedure if EXISTS rollbacktest;
delimiter //
CREATE PROCEDURE rollbacktest ()
BEGIN
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION, SQLWARNING SET @x2 = 4;
SET autocommit = 0;
start transaction;
SET @x2 = 0;
insert into test(t)values (800);
CREATE TABLE `test2` (
`t` int(11) UNSIGNED NOT NULL
)
ENGINE=InnoDB
DEFAULT CHARACTER SET=utf8 COLLATE=utf8_persian_ci ;
if @x2 = 4 THEN ROLLBACK; else commit; end if;
END;
//
delimiter ;
CALL rollbacktest();
Your problem is that you're doing DDL (CREATE TABLE), which cannot be done in a transaction, so it'll implicitly commit the stuff you've done before.
This will also be the case if you try to do DROP TABLE, ALTER TABLE, or TRUNCATE TABLE, among other things. Essentially, any statement that cannot be rolled back will cause the open transaction to be implicitly committed.
If I remember correctly, the CREATE TABLE implicitly commits the transaction.
http://dev.mysql.com/doc/refman/5.1/en/implicit-commit.html
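One way to work around it, as a sketch (assuming the table can be created ahead of the transactional work rather than inside it): run the DDL first, so that only rollback-safe DML remains inside the transaction.
-- DDL first, outside the transaction (it would implicitly commit one anyway)
CREATE TABLE IF NOT EXISTS `test2` (
  `t` int(11) UNSIGNED NOT NULL
) ENGINE=InnoDB DEFAULT CHARACTER SET=utf8 COLLATE=utf8_persian_ci;
-- only DML inside the transaction, so a ROLLBACK undoes everything
START TRANSACTION;
INSERT INTO test(t) VALUES (800);
INSERT INTO test2(t) VALUES (800);
COMMIT; -- or ROLLBACK if any statement failed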
I have a problem creating a trigger for a basic table: on insert it should check whether one of the inserted values is 3000 or more and replace it with 0. It throws this error:
Can't update table 'staff' in stored function/trigger because it is already used by statement which invoked this stored function/trigger
The structure of the table is very simple:
CREATE TABLE `staff` (
`ID` int(11) NOT NULL,
`NAZWISKO` varchar(50) DEFAULT NULL,
`PLACA` float DEFAULT NULL
)
And the trigger for it looks like this:
BEGIN
IF new.placa >= 3000 THEN
UPDATE staff SET new.placa = 0;
END IF;
END
I don't fully understand what is happening here; I suspect some kind of recursion, but I am quite new to triggers and I have a lab coming up, so I want to be prepared for it.
MySQL disallows a trigger from doing UPDATE/INSERT/DELETE against the same table the trigger is defined on, because there is too great a chance of causing an infinite loop. That is, in an UPDATE trigger, if you could UPDATE the same table, that would fire the UPDATE trigger again, which would UPDATE the same table, and so on and so on.
But I guess you only want to change the value of placa on the same row being handled by the trigger. If so, just SET it:
BEGIN
IF new.placa >= 3000 THEN
SET new.placa = 0;
END IF;
END
Remember that you must use a BEFORE trigger when changing column values.
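For reference, a minimal sketch of the full trigger definition (the name staff_bi is made up here; it assumes a BEFORE INSERT trigger, and you would use BEFORE UPDATE instead if that is the event you need):
DELIMITER //
CREATE TRIGGER staff_bi BEFORE INSERT ON staff
FOR EACH ROW
BEGIN
  -- cap PLACA before the row is written; only a BEFORE trigger may change NEW values
  IF NEW.placa >= 3000 THEN
    SET NEW.placa = 0;
  END IF;
END //
DELIMITER ;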
I am trying to avoid deletion of more than 1 row at a time in MySQL by using a BEFORE DELETE trigger.
The sample table and trigger are as below.
Table test:
DROP TABLE IF EXISTS `test`;
CREATE TABLE `test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`a` int(11) NOT NULL,
`b` int(11) NOT NULL,
PRIMARY KEY (`id`));
INSERT INTO `test` (`id`, `a`, `b`)
VALUES (1, 1, 2);
INSERT INTO `test` (`id`, `a`, `b`)
VALUES (2, 3, 4);
Trigger:
DELIMITER //
DROP TRIGGER IF EXISTS prevent_multiple_deletion;
CREATE TRIGGER prevent_multiple_deletion
BEFORE DELETE ON test
FOR EACH STATEMENT
BEGIN
IF(ROW_COUNT()>=2) THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Cannot delete more than one order per time!';
END IF;
END //
DELIMITER ;
This still allows multiple rows to be deleted. Even if I change the IF to >= 1, it still allows the operation.
My idea is to prevent operations such as:
DELETE FROM `test` WHERE `id`< 5;
Can you help me? I know that the current version of MySQL doesn't allow FOR EACH STATEMENT triggers.
Thank you!
Firstly, getting some syntax error(s) out of our way, from your original attempt:
Instead of FOR EACH STATEMENT, it should be FOR EACH ROW.
Since you have already changed the delimiter to //, you need to use // (instead of ;) to terminate the DROP TRIGGER IF EXISTS ... statement.
ROW_COUNT() will be 0 in a BEFORE DELETE trigger, as no rows have been deleted yet, so that approach will not work.
Now, the trick here is to use a session-level (and session-persistent) user-defined variable. We can define a variable, let's say @rows_being_deleted, and later check whether it is already defined or not.
FOR EACH ROW runs the same set of statements for every row being deleted. So we just check whether the session variable already exists: if it does not, we define it. Basically, for the first row being deleted it gets defined, and it persists for as long as the session does.
If there are more rows to be deleted, the trigger runs the same set of statements for the remaining rows. On the second row, the previously defined variable is found, and we can simply throw an exception.
Note that within the same session multiple DELETE statements may be triggered, so before throwing the exception we need to set the @rows_being_deleted value back to NULL.
Following will work:
DELIMITER //
DROP TRIGGER IF EXISTS prevent_multiple_deletion //
CREATE TRIGGER prevent_multiple_deletion
BEFORE DELETE ON `test`
FOR EACH ROW
BEGIN
-- check if the variable is already defined or not
IF( @rows_being_deleted IS NULL ) THEN
SET @rows_being_deleted = 1; -- set its value
ELSE -- it already exists and we are on the next "row"
-- just for testing, to check the row count:
-- SET @rows_being_deleted = @rows_being_deleted + 1;
-- We have to reset it to NULL, as within the same session
-- another DELETE statement may be triggered.
SET @rows_being_deleted = NULL;
-- throw exception
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Cannot delete more than one order per time!';
END IF;
END //
DELIMITER ;
DB Fiddle Demo 1: Trying to delete more than one row.
DELETE FROM `test` WHERE `id`< 5;
Result:
Query Error: Error: ER_SIGNAL_EXCEPTION: Cannot delete more than one
order per time!
DB Fiddle Demo 2: Trying to delete only one row
Query #1
DELETE FROM `test` WHERE `id` = 1;
Deletion successfully happened. We can check the remaining rows using Select.
Query #2
SELECT * FROM `test`;
| id | a | b |
| --- | --- | --- |
| 2 | 3 | 4 |
I need a sequential number for the rows in a table, and I need to ensure it is always sequential with no gaps on insert. When a row is deleted I can leave the gap, but on insert I must fill the gaps with the new rows. The reason for this is that a different system must line up one for one with the row records, yet the database can be manipulated by others, both directly in SQL and via an application. I am thinking a trigger will let me handle the "something changed" part, but how do I actually determine whether I have gaps and assign the sequence number on insert? Even if I have to maintain the deleted sequence numbers in a separate table and manage them myself, that is fine; I am required to line up one for one with this other system no matter how the table gets manipulated.
An AUTO_INCREMENT field will not work: when a row gets deleted, the next insert will still use the next auto-increment value. I would need an "insert at ..." or perhaps keep the row, add an IsDeleted field, and make the table read-only (no more inserts/deletes), but how would I do that?
Perhaps when a row is inserted I could set the sequence number at a gap if one is found, or at the end if not.
Does anybody have experience doing this kind of thing?
I know there is a lot here. I have tried to document it rather thoroughly inside the code and here and there. It uses stored procedures; you can naturally pull the code out and not use that method. It uses a main table that houses the next available incrementors, uses safe InnoDB intention locks for concurrency, and has a reuse table plus stored procs to support it.
It does not in any way use the table myTable; that table is shown only for your own imagination, based on the comments under your question. The summary is that you know you will have gaps upon DELETE, and you want some orderly fashion to reuse those slots, those sequence numbers. So, when you DELETE a row, use the stored procs accordingly to add that number. Naturally there is also a stored proc to get the next sequence number for reuse, among other things.
For the purposes of testing, your sectionType = 'devices'
And best of all it is tested!
Schema:
create table myTable
( -- your main table, the one you cherish
`id` int auto_increment primary key, -- ignore this
`seqNum` int not null, -- FOCUS ON THIS
`others` varchar(100) not null
) ENGINE=InnoDB;
create table reuseMe
( -- table for sequence numbers to reuse
`seqNum` int not null primary key, -- FOCUS ON THIS
`reused` int not null -- 0 upon entry, 1 when used up (reused)
-- the primary key enforces uniqueness
) ENGINE=InnoDB;
CREATE TABLE `sequences` (
-- table of sequence numbers system-wide
-- this is the table that allocates the incrementors to you
`id` int NOT NULL AUTO_INCREMENT,
`sectionType` varchar(200) NOT NULL,
`nextSequence` int NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `sectionType` (`sectionType`)
) ENGINE=InnoDB;
INSERT sequences(sectionType,nextSequence) values ('devices',1); -- this is the focus
INSERT sequences(sectionType,nextSequence) values ('plutoSerialNum',1); -- not this
INSERT sequences(sectionType,nextSequence) values ('nextOtherThing',1); -- not this
-- the other ones are conceptuals for multi-use of a sequence table
Stored Proc: uspGetNextSequence
DROP PROCEDURE IF EXISTS uspGetNextSequence;
DELIMITER $$
CREATE PROCEDURE uspGetNextSequence(p_sectionType varchar(200))
BEGIN
-- a stored proc to manage next sequence numbers handed to you.
-- driven by the simple concept of a name. So we call it a section type.
-- uses SAFE INNODB Intention Locks to support concurrency
DECLARE valToUse INT;
START TRANSACTION;
SELECT nextSequence into valToUse from sequences where sectionType=p_sectionType FOR UPDATE;
IF valToUse is null THEN
SET valToUse=-1;
END IF;
UPDATE sequences set nextSequence=nextSequence+1 where sectionType=p_sectionType;
COMMIT; -- get it and release INTENTION LOCK ASAP
SELECT valToUse as yourSeqNum; -- return as a 1 column, 1 row resultset
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspGetNextSequence('devices'); -- your section is 'devices'
After you call uspGetNextSequence(), it is your RESPONSIBILITY to ensure that that sequence number is either added into myTable (by confirming it) or, if the insert fails, inserted into the reuse table with a call to uspAddToReuseList(). Not all inserts succeed; focus on this part. With this code you cannot "put" the number back into the sequences table, because of concurrency, other users, and the range having already moved past it. So, simply, if the insert fails, put the number into reuseMe via uspAddToReuseList().
Stored Proc: uspAddToReuseList:
DROP PROCEDURE IF EXISTS uspAddToReuseList;
DELIMITER $$
CREATE PROCEDURE uspAddToReuseList(p_reuseNum INT)
BEGIN
-- a stored proc to insert a sequence num into the reuse list
-- marks it available for reuse (a status column called `reused`)
INSERT reuseMe(seqNum,reused) SELECT p_reuseNum,0; -- 0 means it is avail, 1 not
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspAddToReuseList(701); -- 701 needs to be reused
Stored Proc: uspGetOneToReuse:
DROP PROCEDURE IF EXISTS uspGetOneToReuse;
DELIMITER $$
CREATE PROCEDURE uspGetOneToReuse()
BEGIN
-- a stored proc to get an available sequence num for reuse
-- a return of -1 means there aren't any
-- the slot will be marked as reused, the row will remain
DECLARE retNum int; -- the seq number to return, to reuse, -1 means there isn't one
START TRANSACTION;
-- it is important that 0 or 1 rows hit the following condition
-- also note that FOR UPDATE is the innodb Intention Lock
-- The lock is for concurrency (multiple users at once)
SELECT seqNum INTO retNum
FROM reuseMe WHERE reused=0 ORDER BY seqNum LIMIT 1 FOR UPDATE;
IF retNum is null THEN
SET retNum=-1;
ELSE
UPDATE reuseMe SET reused=1 WHERE seqNum=retNum; -- slot used
END IF;
COMMIT; -- release INTENTION LOCK ASAP
SELECT retNum as yoursToReuse; -- > 0 means it is yours to reuse; -1 means there is none
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspGetOneToReuse();
Stored Proc: uspCleanReuseList:
DROP PROCEDURE IF EXISTS uspCleanReuseList;
DELIMITER $$
CREATE PROCEDURE uspCleanReuseList()
BEGIN
-- a stored proc to remove rows that have been successfully reused
DELETE FROM reuseMe where reused=1;
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspCleanReuseList();
Stored Proc: uspOoopsResetToAvail:
DROP PROCEDURE IF EXISTS uspOoopsResetToAvail;
DELIMITER $$
CREATE PROCEDURE uspOoopsResetToAvail(p_reuseNum INT)
BEGIN
-- a stored proc to deal with a reuse attempt (sent back to you)
-- that you need to reset the number as still available,
-- perhaps because of a failed INSERT when trying to reuse it
UPDATE reuseMe SET reused=0 WHERE seqNum=p_reuseNum;
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspOoopsResetToAvail(701);
Workflow ideas:
Let GNS mean a call to uspGetNextSequence().
Let RS mean Reuse Sequence via a call to uspGetOneToReuse()
When a new INSERT is desired, call RS:
A. If RS returns -1 then nothing is to be reused so call GNS which returns N. If you can successfully INSERT with myTable.seqNum=N with a confirm, you are done. If you cannot successfully INSERT it, then call uspAddToReuseList(N).
B. If RS returns > 0, note in your head that slot has reuseMe.reused=1, a good thing to remember. So it is assumed to be in the process of being successfully reused. Let's call that sequence number N. If you can successfully INSERT with myTable.seqNum=N with a confirm, you are done. If you cannot successfully INSERT it, then call uspOoopsResetToAvail(N).
When you deem it safe to call uspCleanReuseList(), do so. Adding a DATETIME column to the reuseMe table might be a good idea, denoting when the row from myTable was originally deleted and caused the reuseMe row to get its original INSERT. A concrete usage sketch of this workflow follows.
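A sketch of what the workflow above might look like from the caller's side (the literal values 42 and 'some device' are made up for illustration; in practice your application reads the numbers from the procs' result sets):
-- new row needed: first try to reuse a slot
CALL uspGetOneToReuse();            -- suppose it returns -1 (nothing to reuse)
CALL uspGetNextSequence('devices'); -- suppose it returns 42
INSERT INTO myTable(seqNum, others) VALUES (42, 'some device');
-- if that INSERT had failed, hand the number over for later reuse:
-- CALL uspAddToReuseList(42);
-- when a row is deleted, record its slot so a future insert can fill the gap
DELETE FROM myTable WHERE seqNum = 42;
CALL uspAddToReuseList(42);
-- periodically, once reused slots are confirmed present in myTable
CALL uspCleanReuseList();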
I am wondering whether it is necessary to use locking in a most likely concurrent environment, and how, in the following case. I am using a MySQL database server with the InnoDB engine.
Let's say I have a table
CREATE TABLE `A` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
`m_id` INT NOT NULL, -- manual id
`name` VARCHAR(10)
) ENGINE=INNODB;
And the procedure
CREATE PROCEDURE `add_record`(IN _NAME VARCHAR(10))
BEGIN
DECLARE _m_id INT;
DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
START TRANSACTION;
SELECT (`m_id` + 1) INTO _m_id FROM `A` WHERE `id` = (SELECT MAX(`id`) FROM `A`);
INSERT INTO `A`(`m_id`, `name`) VALUES(_m_id, _NAME);
COMMIT;
END$$
As you can see, I am incrementing m_id manually, and concurrent transactions are very likely. I can't make up my mind whether the database might end up in an inconsistent state. Also, using FOR UPDATE and LOCK IN SHARE MODE seems to have no point in this situation, as the transaction deals with new records and has nothing to do with updates on a specific row. Furthermore, LOCK TABLES is not allowed in stored procedures and is quite insufficient anyway.
So, my question is how to avoid an inconsistent state in the scenario above, if it can actually happen. Any advice would be appreciated.
transaction deals with new records and has nothing to do with updates on a specific row
Such a new record is known as a phantom:
phantom
A row that appears in the result set of a query, but not in the result set of an earlier query. For example, if a query is run twice within a transaction, and in the meantime, another transaction commits after inserting a new row or updating a row so that it matches the WHERE clause of the query.
This occurrence is known as a phantom read. It is harder to guard against than a non-repeatable read, because locking all the rows from the first query result set does not prevent the changes that cause the phantom to appear.
Among different isolation levels, phantom reads are prevented by the serializable read level, and allowed by the repeatable read, consistent read, and read uncommitted levels.
So to prevent phantoms from occurring on any statement, one can simply set the transaction isolation level to be SERIALIZABLE. InnoDB implements this using next-key locks, which not only locks the records that your queries match but also locks the gaps between those records.
The same can be accomplished on a per-statement basis by using locking reads, such as you describe in your question: LOCK IN SHARE MODE or FOR UPDATE (the former allows concurrent sessions to read the matching records while the lock is in place, whilst the latter does not).
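As a sketch of the per-statement variant (the procedure name add_record_locking is made up; it assumes the default REPEATABLE READ isolation level and that serializing concurrent callers is acceptable), replacing the plain read with a FOR UPDATE locking read makes concurrent transactions queue on the lock instead of both computing the same m_id:
DELIMITER $$
CREATE PROCEDURE `add_record_locking`(IN _NAME VARCHAR(10))
BEGIN
  DECLARE _m_id INT;
  DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
  START TRANSACTION;
  -- FOR UPDATE takes exclusive next-key locks on the scanned records and the
  -- gap above them, so a concurrent caller blocks here until we COMMIT
  SELECT COALESCE(MAX(`m_id`), 0) + 1 INTO _m_id FROM `A` FOR UPDATE;
  INSERT INTO `A`(`m_id`, `name`) VALUES(_m_id, _NAME);
  COMMIT;
END$$
DELIMITER ;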
First, a sequence table
CREATE TABLE m_id_sequence (
id integer primary key auto_increment
);
and then alter the procedure to get the next m_id from the sequence table
DELIMITER $$
CREATE PROCEDURE `add_record`(IN _NAME VARCHAR(10))
BEGIN
DECLARE _m_id INT;
DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
START TRANSACTION;
INSERT INTO m_id_sequence VALUES ();
SET _m_id = LAST_INSERT_ID();
INSERT INTO `A`(`m_id`, `name`) VALUES(_m_id, _NAME);
COMMIT;
END$$
DELIMITER ;
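A quick usage sketch (table and procedure names as in the question): each call draws its m_id from the auto-increment of m_id_sequence, and since LAST_INSERT_ID() is per-connection, concurrent callers cannot read each other's value.
CALL add_record('alpha');
CALL add_record('beta');
SELECT `m_id`, `name` FROM `A` ORDER BY `m_id`;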
I have a situation in which I don't want inserts to take place (the transaction should rollback) if a certain condition is met. I could write this logic in the application code, but say for some reason, it has to be written in MySQL itself (say clients written in different languages will be inserting into this MySQL InnoDB table) [that's a separate discussion].
Table definition:
CREATE TABLE table1(x int NOT NULL);
The trigger looks something like this:
CREATE TRIGGER t1 BEFORE INSERT ON table1
FOR EACH ROW
IF (condition) THEN
NEW.x = NULL;
END IF;
END;
I am guessing it could also be written as (untested):
CREATE TRIGGER t1 BEFORE INSERT ON table1
FOR EACH ROW
IF (condition) THEN
ROLLBACK;
END IF;
END;
But, this doesn't work:
CREATE TRIGGER t1 BEFORE INSERT ON table1 ROLLBACK;
You are guaranteed that:
Your DB will always be MySQL
Table type will always be InnoDB
That NOT NULL column will always stay the way it is
Question: Do you see anything objectionable in the 1st method?
From the trigger documentation:
The trigger cannot use statements that explicitly or implicitly begin or end a transaction such as START TRANSACTION, COMMIT, or ROLLBACK.
Your second option couldn't be created. However:
Failure of a trigger causes the statement to fail, so trigger failure also causes rollback.
So Eric's suggestion to use a query that is guaranteed to result in an error is the next option. However, MySQL doesn't have the ability to raise custom errors -- you'll have false positives to deal with. Encapsulating inside a stored procedure won't be any better, due to the lack of custom error handling...
If we knew more detail about what your condition is, it's possible it could be dealt with via a constraint.
Update
I've confirmed that though MySQL has CHECK constraint syntax, it's not enforced by any engine. If you lock down access to a table, you could handle limitation logic in a stored procedure. The following trigger won't work, because it is referencing the table being inserted to:
CREATE TRIGGER t1 BEFORE INSERT ON table1
FOR EACH ROW
DECLARE num INT;
SET num = (SELECT COUNT(t.col)
FROM your_table t
WHERE t.col = NEW.col);
IF (num > 100) THEN
SET NEW.col = 1/0;
END IF;
END;
...results in MySQL error 1235.
Have you tried raising an error to force a rollback? For example:
CREATE TRIGGER t1 BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
IF (condition) THEN
SELECT 1/0 FROM table1 LIMIT 1;
END IF;
END;
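As an aside, on MySQL 5.5 and later you can skip the division trick and raise the error explicitly with SIGNAL (the same mechanism used in the delete-prevention trigger earlier on this page). A minimal sketch, with NEW.x < 0 standing in for whatever your real condition is:
DELIMITER //
CREATE TRIGGER t1 BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
  IF (NEW.x < 0) THEN
    -- raising an error aborts the INSERT, so the statement is rolled back
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Insert rejected by trigger t1';
  END IF;
END //
DELIMITER ;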