Making a mechanism similar to Oracle's sequences in MySQL

MySQL provides an automatic mechanism to increment record IDs. This is OK for many purposes, but I need to be able to use sequences as offered by ORACLE. Obviously, there is no problem with creating a table for that purpose.
The solution SHOULD be simple:
1) Create a table to host all the needed sequences,
2) Create a function that increases the value of a specific sequence and returns the new value,
3) Create a function that returns the current value of a sequence.
In theory, it looks simple... BUT...
When increasing the value of a sequence (much the same as nextval in Oracle), you need to prevent other sessions from performing this operation (or even fetching the current value) until the update is completed.
Two theoretical options:
a - Use an UPDATE statement that would return the new value in a single shot, or
b - Lock the table between the UPDATE and SELECT.
Unfortunately, it would appear that MySQL does not allow locking tables within functions/procedures, and when trying to do the whole thing in a single statement (something like UPDATE ... RETURNING ...) you must use @-type user variables, which survive the completion of the function/procedure.
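To illustrate, the single-statement attempt I mean looks roughly like this (table and column names are only placeholders); the user variable is exactly the part that survives the routine:
-- increment and capture the new value in one UPDATE;
-- @new_val is a session user variable and outlives any function that runs this
UPDATE sequences
   SET nextSequence = (@new_val := nextSequence + 1)
 WHERE sectionType = 'Carburetor';
select @new_val as nextval;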
Does anyone have an idea/working solution for this?
Thanks.

The following is a simple example using a FOR UPDATE intention lock: a row-level lock with the InnoDB engine. The sample shows four rows of next-available sequences that will not suffer from the well-known InnoDB gap anomaly (the case where gaps occur after failed use of an AUTO_INCREMENT).
Schema:
-- drop table if exists sequences;
create table sequences
( id int auto_increment primary key,
sectionType varchar(200) not null,
nextSequence int not null,
unique key(sectionType)
) ENGINE=InnoDB;
-- truncate table sequences;
insert sequences (sectionType,nextSequence) values
('Chassis',1),('Engine Block',1),('Brakes',1),('Carburetor',1);
Sample code:
START TRANSACTION; -- Line1
SELECT nextSequence into @mine_to_use from sequences where sectionType='Carburetor' FOR UPDATE; -- Line2
select @mine_to_use; -- Line3
UPDATE sequences set nextSequence=nextSequence+1 where sectionType='Carburetor'; -- Line4
COMMIT; -- Line5
Ideally you do not have a Line3, or any bloaty code at all, that would delay other clients on a lock wait. Meaning: get your next sequence to use, perform the update (the incrementing part), and COMMIT, ASAP.
The above in a stored procedure:
DROP PROCEDURE if exists getNextSequence;
DELIMITER $$
CREATE PROCEDURE getNextSequence(p_sectionType varchar(200),OUT p_YoursToUse int)
BEGIN
-- for flexibility, return the sequence number as both an OUT parameter and a single row resultset
START TRANSACTION;
SELECT nextSequence into @mine_to_use from sequences where sectionType=p_sectionType FOR UPDATE;
UPDATE sequences set nextSequence=nextSequence+1 where sectionType=p_sectionType;
COMMIT; -- get it and release INTENTION LOCK ASAP
set p_YoursToUse=@mine_to_use; -- set the OUT parameter
select @mine_to_use as yourSeqNum; -- also return as a 1 column, 1 row resultset
END$$
DELIMITER ;
Test:
set @myNum := -1;
call getNextSequence('Carburetor',@myNum);
+------------+
| yourSeqNum |
+------------+
|          4 |
+------------+
select @myNum; -- 4
Modify the stored procedure accordingly for your needs, such as keeping only one of the two mechanisms for retrieving the sequence number (either the OUT parameter or the result set). In other words, it is easy to ditch the OUT parameter concept.
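For example, a trimmed, resultset-only variant might look like this (an untested sketch of the same locking logic; the name getNextSequenceRS is just for illustration):
DROP PROCEDURE IF EXISTS getNextSequenceRS;
DELIMITER $$
CREATE PROCEDURE getNextSequenceRS(p_sectionType varchar(200))
BEGIN
    -- grab the number under a row lock, bump the counter, commit ASAP
    DECLARE mine_to_use INT;
    START TRANSACTION;
    SELECT nextSequence INTO mine_to_use FROM sequences WHERE sectionType=p_sectionType FOR UPDATE;
    UPDATE sequences SET nextSequence=nextSequence+1 WHERE sectionType=p_sectionType;
    COMMIT; -- release the lock ASAP
    SELECT mine_to_use AS yourSeqNum; -- 1 column, 1 row resultset
END$$
DELIMITER ;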
If you do not release the lock ASAP (it is obviously not needed after the UPDATE) and instead run time-consuming code before the release, then the following can occur, after a timeout period, for other clients awaiting a sequence number:
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
Hopefully this is never an issue.
show variables where variable_name='innodb_lock_wait_timeout';
MySQL Manual Page for innodb_lock_wait_timeout.
On my system at the moment it has a value of 50 (seconds). A wait of more than a second or two is probably unbearable in most situations.
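If faster failure is preferred, the timeout can also be lowered, for example per session on reasonably recent servers (a sketch; confirm the variable is dynamic on your version):
SET SESSION innodb_lock_wait_timeout = 2; -- fail after 2 seconds instead of 50
-- or server-wide (requires the SUPER privilege):
-- SET GLOBAL innodb_lock_wait_timeout = 2;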
Also of interest during transactions is the TRANSACTIONS section of the output from the following command:
SHOW ENGINE INNODB STATUS;

Related

LOAD DATA INFILE interrupt

First MySQL command line:
use usersbase;
LOAD DATA LOCAL INFILE 'D:/base/users.txt'
INTO TABLE users
FIELDS TERMINATED BY ',';
Second:
use usersbase;
set session transaction isolation level read uncommitted;
select count(1) from users;
How do I stop loading from the file if I see that the users table has n rows and I don't need more? How do I save the currently loaded rows and stop loading?
Try this:
Use LOAD DATA INFILE .. IGNORE ...
Add a temporary trigger to this table, like:
DELIMITER $$
CREATE TRIGGER prevent_excess_lines_insertion
BEFORE INSERT
ON users
FOR EACH ROW
BEGIN
    IF 50000 < (SELECT COUNT(*) FROM users) THEN
        SET NEW.id = 1;
    END IF;
END$$
DELIMITER ;
As each line is loaded, the number of rows already in the table (excluding the row about to be inserted) is counted and compared with the predefined limit (50000).
If the current row count is below the limit, the row is inserted.
If the limit has been reached, a predefined value (1) is assigned to the primary key column. This causes a unique constraint violation, which is ignored thanks to the IGNORE modifier.
In that case the whole file will still be read, but only the needed number of rows will be inserted.
If you want to abort the process instead, remove the IGNORE modifier and replace the SET statement with a SIGNAL that raises a generic SQL error; the loading process will then be terminated.
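A sketch of that variant (the SQLSTATE and message text are just examples):
DELIMITER $$
CREATE TRIGGER prevent_excess_lines_insertion
BEFORE INSERT
ON users
FOR EACH ROW
BEGIN
    IF 50000 < (SELECT COUNT(*) FROM users) THEN
        -- raising a generic SQL error aborts LOAD DATA (run it without the IGNORE modifier)
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Row limit reached, aborting LOAD DATA';
    END IF;
END$$
DELIMITER ;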
Do not forget to remove the trigger immediately after performing the import.
Note that COUNT(*) in InnoDB can be pretty slow on large tables. Doing it before each insert might make the load take a while. – Barmar
This is true :(
You may use a user-defined variable instead of querying the number of rows in the table. The trigger will be:
DELIMITER $$
CREATE TRIGGER prevent_excess_lines_insertion
BEFORE INSERT
ON users
FOR EACH ROW
BEGIN
    IF (@needed_count := @needed_count - 1) < 0 THEN
        SET NEW.id = 1;
    END IF;
END$$
DELIMITER ;
Before the import you must set this variable to the number of rows to be loaded, for example SET @needed_count := 50000;. The variable must strictly be set in the same connection, and its name must not clash with other variable names if any are in use.
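Putting the pieces together, an import session might look roughly like this (a sketch; the path and the 50000 limit are taken from the question):
use usersbase;
SET @needed_count := 50000;          -- same connection as the LOAD, as noted above
-- create the @needed_count trigger from above here
LOAD DATA LOCAL INFILE 'D:/base/users.txt'
IGNORE INTO TABLE users
FIELDS TERMINATED BY ',';
DROP TRIGGER prevent_excess_lines_insertion;   -- remove it immediately after the import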

MySQL Contiguous Sequential Rows Field even on delete and insert

I need a sequential number sequence for the rows in a table, and I need to ensure that it is always sequential with no gaps on insert. When a row is deleted I can leave the gap, but on insert I must fill the gaps with the new rows. The reason for this is that a different system must line up one-for-one with the row records. Yet the db can be manipulated by others, both on the SQL end and via an application. I am thinking a trigger will let me handle the "something changed" part, but how do I actually determine whether I have gaps and assign this sequence number on insert? Even if I have to maintain the deleted sequences in a separate table and manage them, that is fine; I am required to line up one-for-one with this other system no matter how the table gets manipulated.
An AUTO_INCREMENT field will not work: when a row gets deleted, the next insert still uses the next auto-increment value. I would need an "insert at ..", or perhaps keep the row, add an IsDeleted field, and make the table read only (no more inserts/deletes), but how would I do that?
Perhaps when a row is inserted I could set the sequence number to fill a gap if one is found, or to the end if not.
Does somebody have experience doing this kind of thing?
I know there is a lot here. I tried to document it rather well inside the code and here and there. It uses Stored Procedures. You can naturally pull the code out and not use that method. It uses a main table that houses next available incrementors. It uses safe INNODB Intention Locks for concurrency. It has a reuse table and stored procs to support it.
It does not in any way use the table myTable. It is shown there for your own imagination, based on comments under your question. The summary of that is that you know you will have gaps upon DELETE. You want some orderly fashion to reuse those slots, those sequence numbers. So, when you DELETE a row, use the stored procs accordingly to add that number. Naturally there is a stored proc to get the next sequence number for reuse, and other things.
For the purposes of testing, your sectionType = 'devices'
And best of all it is tested!
Schema:
create table myTable
( -- your main table, the one you cherish
`id` int auto_increment primary key, -- ignore this
`seqNum` int not null, -- FOCUS ON THIS
`others` varchar(100) not null
) ENGINE=InnoDB;
create table reuseMe
( -- table for sequence numbers to reuse
`seqNum` int not null primary key, -- FOCUS ON THIS
`reused` int not null -- 0 upon entry, 1 when used up (reused)
-- the primary key enforces uniqueness
) ENGINE=InnoDB;
CREATE TABLE `sequences` (
-- table of sequence numbers system-wide
-- this is the table that allocates the incrementors to you
`id` int NOT NULL AUTO_INCREMENT,
`sectionType` varchar(200) NOT NULL,
`nextSequence` int NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `sectionType` (`sectionType`)
) ENGINE=InnoDB;
INSERT sequences(sectionType,nextSequence) values ('devices',1); -- this is the focus
INSERT sequences(sectionType,nextSequence) values ('plutoSerialNum',1); -- not this
INSERT sequences(sectionType,nextSequence) values ('nextOtherThing',1); -- not this
-- the other ones are conceptuals for multi-use of a sequence table
Stored Proc: uspGetNextSequence
DROP PROCEDURE IF EXISTS uspGetNextSequence;
DELIMITER $$
CREATE PROCEDURE uspGetNextSequence(p_sectionType varchar(200))
BEGIN
-- a stored proc to manage next sequence numbers handed to you.
-- driven by the simple concept of a name. So we call it a section type.
-- uses SAFE INNODB Intention Locks to support concurrency
DECLARE valToUse INT;
START TRANSACTION;
SELECT nextSequence into valToUse from sequences where sectionType=p_sectionType FOR UPDATE;
IF valToUse is null THEN
SET valToUse=-1;
END IF;
UPDATE sequences set nextSequence=nextSequence+1 where sectionType=p_sectionType;
COMMIT; -- get it and release INTENTION LOCK ASAP
SELECT valToUse as yourSeqNum; -- return as a 1 column, 1 row resultset
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspGetNextSequence('devices'); -- your section is 'devices'
After you call uspGetNextSequence(), it is your RESPONSIBILITY to ensure that that sequence number is either added into myTable (by confirming it) or, if the insert fails, added to the reuse table with a call to uspAddToReuseList(). Not all inserts succeed; focus on this part. With this code you cannot "put" the number back into the sequences table, because of concurrency, other users, and the range having already moved past it. So, simply: if the insert fails, put the number into reuseMe via uspAddToReuseList().
Stored Proc: uspAddToReuseList:
DROP PROCEDURE IF EXISTS uspAddToReuseList;
DELIMITER $$
CREATE PROCEDURE uspAddToReuseList(p_reuseNum INT)
BEGIN
-- a stored proc to insert a sequence num into the reuse list
-- marks it available for reuse (a status column called `reused`)
INSERT reuseMe(seqNum,reused) SELECT p_reuseNum,0; -- 0 means it is avail, 1 not
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspAddToReuseList(701); -- 701 needs to be reused
Stored Proc: uspGetOneToReuse:
DROP PROCEDURE IF EXISTS uspGetOneToReuse;
DELIMITER $$
CREATE PROCEDURE uspGetOneToReuse()
BEGIN
-- a stored proc to get an available sequence num for reuse
-- a return of -1 means there aren't any
-- the slot will be marked as reused, the row will remain
DECLARE retNum int; -- the seq number to return, to reuse, -1 means there isn't one
START TRANSACTION;
-- it is important that 0 or 1 rows hit the following condition
-- also note that FOR UPDATE is the innodb Intention Lock
-- The lock is for concurrency (multiple users at once)
SELECT seqNum INTO retNum
FROM reuseMe WHERE reused=0 ORDER BY seqNum LIMIT 1 FOR UPDATE;
IF retNum is null THEN
SET retNum=-1;
ELSE
UPDATE reuseMe SET reused=1 WHERE seqNum=retNum; -- slot used
END IF;
COMMIT; -- release INTENTION LOCK ASAP
SELECT retNum as yoursToReuse; -- >0 or -1 means there is none
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspGetOneToReuse();
Stored Proc: uspCleanReuseList:
DROP PROCEDURE IF EXISTS uspCleanReuseList;
DELIMITER $$
CREATE PROCEDURE uspCleanReuseList()
BEGIN
-- a stored proc to remove rows that have been successfully reused
DELETE FROM reuseMe where reused=1;
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspCleanReuseList();
Stored Proc: uspOoopsResetToAvail:
DROP PROCEDURE IF EXISTS uspOoopsResetToAvail;
DELIMITER $$
CREATE PROCEDURE uspOoopsResetToAvail(p_reuseNum INT)
BEGIN
-- a stored proc to deal with a reuse attempt (sent back to you)
-- that you need to reset the number as still available,
-- perhaps because of a failed INSERT when trying to reuse it
UPDATE reuseMe SET reused=0 WHERE seqNum=p_reuseNum;
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspOoopsResetToAvail(701);
Workflow ideas:
Let GNS mean a call to uspGetNextSequence().
Let RS mean Reuse Sequence via a call to uspGetOneToReuse().
When a new INSERT is desired, call RS:
A. If RS returns -1 then nothing is to be reused so call GNS which returns N. If you can successfully INSERT with myTable.seqNum=N with a confirm, you are done. If you cannot successfully INSERT it, then call uspAddToReuseList(N).
B. If RS returns > 0, note in your head that slot has reuseMe.reused=1, a good thing to remember. So it is assumed to be in the process of being successfully reused. Let's call that sequence number N. If you can successfully INSERT with myTable.seqNum=N with a confirm, you are done. If you cannot successfully INSERT it, then call uspOoopsResetToAvail(N).
When you deem it safe to call uspCleanReuseList(), do so. Adding a DATETIME column to the reuseMe table might be a good idea, denoting when a row from myTable was originally deleted and caused the reuseMe row to get its original INSERT.
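To make workflow A and B concrete, one round trip from a client session might look like this (a sketch; 42 is just an assumed return value):
-- 1) try to reuse a slot
CALL uspGetOneToReuse();                  -- say this returns yoursToReuse = -1 (nothing to reuse)
-- 2) nothing to reuse, so get a fresh number
CALL uspGetNextSequence('devices');       -- say this returns yourSeqNum = 42
-- 3) attempt the real insert with that number
INSERT INTO myTable (seqNum, others) VALUES (42, 'some device');
-- 4a) if the INSERT succeeded: done
-- 4b) if it failed and 42 was fresh:           CALL uspAddToReuseList(42);
--     if it failed and 42 came from reuseMe:   CALL uspOoopsResetToAvail(42);
-- 5) from time to time, when safe:             CALL uspCleanReuseList();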

Concurrent MYSQL procedure calls returning different results with an unrelated SELECT statement

I'm experiencing some very strange transactional behaviour in my MYSQL application.
I've managed to reduce the problem down to a small isolated test case, the code for which I’ve included below:
-- Setup a new environment
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
DROP DATABASE IF EXISTS `testDB`;
CREATE DATABASE `testDB`;
USE `testDB`;
-- Create a table I want two procedure calls to interact with
CREATE TABLE `tbl_test` (
`id` INT(10) UNSIGNED NOT NULL
, PRIMARY KEY (`id`)
);
-- A second table purely to demonstrate the issue
CREATE TABLE `tbl_test2` (
`id` INT(10) UNSIGNED NOT NULL
);
DELIMITER $$
DROP PROCEDURE IF EXISTS `sp_test` $$
CREATE PROCEDURE `sp_test` ()
BEGIN
START TRANSACTION;
-- CRAZY LINE
SELECT * FROM `tbl_test2`;
-- Insert ignore so both calls don’t try to insert the same row
INSERT IGNORE INTO `tbl_test` (`id`) VALUES (1);
-- Sleep added to make it possible to run concurrently manually
SELECT SLEEP(1) INTO @rubbish;
-- The result I am interested in
SELECT COUNT(*) FROM `tbl_test`;
COMMIT;
END $$
DELIMITER ;
Steps to Reproduce:
Run the above script to create a test database, two tables and a stored procedure.
In two separate connections, as near to simultaneously as possible, run the stored procedure (you can increase the SLEEP time if you need longer):
USE `testDB`;
CALL sp_test ();
The Problem
When executed concurrently over two separate connections the SELECT COUNT(*) FROM `tbl_test`; statement returns different values for the two calls.
When I follow the steps above, I get back 1 from the first of the two procedure calls and 0 from the second.
My understanding of transactional behaviour and table locking is that when the first call reaches the INSERT statement it will create a lock. The second procedure call will reach the same line but must then wait until the transaction from the first call has been committed. Increasing the sleep time reinforces this idea as the second call will take twice as long to complete. If this is the case however, then the second procedure call should pick up the insert from the first call and both results should be equal to 1.
TL;DR
I'm expecting both to equal 1
Note that I am using READ_COMMITTED as my transaction isolation level.
I've tested this using MYSQL server and MariaDB
Further Weirdness
So at this point I assumed my understanding was incorrect. However, I then noticed that by removing the line SELECT * FROM `tbl_test2`; the results suddenly produced the expected values!
I've been experimenting with the script but essentially, including a SELECT statement to any table within the database before the INSERT line causes unanticipated results. I have absolutely no idea why this is the case.
Questions
Is my understanding of the expected transactional behaviour correct?
Why on earth does the SELECT statement to an unrelated table cause the transactional locking to fail?
If anyone can shed some light on this I would be very grateful!

Database inconsistent state MySQL/InnoDB

I am wondering whether it is necessary to use locking in a most likely concurrent environment, and how, in the following case. I am using a MySQL database server with the InnoDB engine.
Let's say I have a table
CREATE TABLE `A` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
`m_id` INT NOT NULL, -- manual id
`name` VARCHAR(10)
) ENGINE=INNODB;
And the procedure
DELIMITER $$
CREATE PROCEDURE `add_record`(IN _NAME VARCHAR(10))
BEGIN
DECLARE _m_id INT;
DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
START TRANSACTION;
SELECT (`m_id` + 1) INTO _m_id FROM `A` WHERE `id` = (SELECT MAX(`id`) FROM `A`);
INSERT INTO `A`(`m_id`, `name`) VALUES(_m_id, _NAME);
COMMIT;
END$$
DELIMITER ;
As you can see, I am increasing m_id manually, and concurrent transactions are very likely. I can't make up my mind whether the database might end up in an inconsistent state. Also, using FOR UPDATE or LOCK IN SHARE MODE seems pointless in this situation, as the transaction deals with new records and has nothing to do with updates on a specific row. Furthermore, LOCK TABLES is not allowed in stored procedures and is rather insufficient anyway.
So, my question is how to avoid an inconsistent state in the described scenario, if it can actually happen. Any advice will be appreciated.
transaction deals with new records and has nothing to do with updates on a specific row
Such a new record is known as a phantom:
phantom
A row that appears in the result set of a query, but not in the result set of an earlier query. For example, if a query is run twice within a transaction, and in the meantime, another transaction commits after inserting a new row or updating a row so that it matches the WHERE clause of the query.
This occurrence is known as a phantom read. It is harder to guard against than a non-repeatable read, because locking all the rows from the first query result set does not prevent the changes that cause the phantom to appear.
Among different isolation levels, phantom reads are prevented by the serializable read level, and allowed by the repeatable read, consistent read, and read uncommitted levels.
So to prevent phantoms from occurring on any statement, one can simply set the transaction isolation level to be SERIALIZABLE. InnoDB implements this using next-key locks, which not only locks the records that your queries match but also locks the gaps between those records.
The same can be accomplished on a per-statement basis by using locking reads, such as you describe in your question: LOCK IN SHARE MODE or FOR UPDATE (the former allows concurrent sessions to read the matching records while the lock is in place, whilst the latter does not).
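For example, the isolation-level route for the current session only is a one-liner (a minimal sketch):
-- affects transactions started after this point, in this session only;
-- verify with SELECT @@tx_isolation (or @@transaction_isolation on newer servers)
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;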
First, a sequence table
CREATE TABLE m_id_sequence (
id integer primary key auto_increment
);
and then alter the procedure to get the next m_id from the sequence table
DELIMITER $$
CREATE PROCEDURE `add_record`(IN _NAME VARCHAR(10))
BEGIN
DECLARE _m_id INT;
DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
START TRANSACTION;
INSERT INTO m_id_sequence VALUES ();
SET _m_id = LAST_INSERT_ID();
INSERT INTO `A`(`m_id`, `name`) VALUES(_m_id, _NAME);
COMMIT;
END$$
DELIMITER ;

MySQL transaction and triggers

Hey guys, here is one I am not able to figure out. We have a table in the database where PHP inserts records. I created a trigger to compute a value to be inserted as well. The computed value should be unique; however, from time to time I get the exact same number for a few rows in the table. The number is a combination of year, month and day, plus the number of the order for that day. I thought that a single insert operation is atomic and the table is locked while the transaction is in progress. I need the computed value to be unique... The server is version 5.0.88, running on Linux CentOS 5 with a dual-core processor.
Here is the trigger:
CREATE TRIGGER bi_order_data BEFORE INSERT ON order_data
FOR EACH ROW BEGIN
SET NEW.auth_code = get_auth_code();
END;
Corresponding routine looks like this:
CREATE FUNCTION `get_auth_code`() RETURNS bigint(20)
BEGIN
DECLARE my_auth_code, acode BIGINT;
SELECT MAX(d.auth_code) INTO my_auth_code
FROM orders_data d
JOIN orders o ON (o.order_id = d.order_id)
WHERE DATE(NOW()) = DATE(o.date);
IF my_auth_code IS NULL THEN
SET acode = ((DATE_FORMAT(NOW(), "%y%m%d")) + 100000) * 10000 + 1;
ELSE
SET acode = my_auth_code + 1;
END IF;
RETURN acode;
END
I thought that single operation of
insert is atomic and table is locked
while transaction is in progress
Either table is locked (MyISAM is used) or records may be locked (InnoDB is used), not both.
Since you mentioned "transaction", I assume that InnoDB is in use.
One of InnoDB's advantages is the absence of table locks, so nothing prevents many trigger bodies from executing simultaneously and producing the same result.
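One untested sketch of a way around this, borrowing the sequence-table FOR UPDATE pattern from the first answer above (the auth_code_counters table and its columns are invented for illustration): keep one counter row per day and take a row lock on it inside the trigger, so concurrent inserts serialize on that row until their transactions commit.
CREATE TABLE auth_code_counters (
    code_date DATE PRIMARY KEY,   -- one counter row per calendar day
    last_code BIGINT NOT NULL
) ENGINE=InnoDB;

DELIMITER $$
CREATE TRIGGER bi_order_data BEFORE INSERT ON order_data
FOR EACH ROW
BEGIN
    DECLARE v_code BIGINT;
    -- make sure today's counter row exists (first order of the day)
    INSERT IGNORE INTO auth_code_counters (code_date, last_code)
    VALUES (CURDATE(), ((DATE_FORMAT(NOW(), '%y%m%d')) + 100000) * 10000);
    -- the row lock blocks concurrent triggers until the surrounding INSERT commits
    SELECT last_code + 1 INTO v_code
      FROM auth_code_counters
     WHERE code_date = CURDATE()
       FOR UPDATE;
    UPDATE auth_code_counters SET last_code = v_code WHERE code_date = CURDATE();
    SET NEW.auth_code = v_code;
END$$
DELIMITER ;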