How to create a unique sequence in MySQL

How can I create a unique sequence number in MySQL?
The scenario is that in table1 a value, say "A" in row1, can appear more than once.
When it first occurs, a sequence number is assigned to it, and the same number is assigned to it each time it appears again.
But the value "B" (say, the next value entered) gets the next sequence number.
So I can't use auto_increment in this scenario. Say I have to check the conditions c1 and c2 for this unique sequence number.
I'm looking for a stored procedure to implement this. I hope my problem is clear.

CREATE TABLE `seq` (
  `n` BIGINT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`n`)
);
DELIMITER $$
DROP FUNCTION IF EXISTS getseq$$
CREATE FUNCTION getseq() RETURNS BIGINT
MODIFIES SQL DATA
BEGIN
  DECLARE r BIGINT;
  INSERT INTO `seq` (`n`) VALUES (NULL);
  SELECT MAX(`n`) INTO r FROM `seq`;
  -- Note: COMMIT is not allowed inside a stored function or trigger,
  -- so the caller's transaction decides when to commit.
  RETURN r;
END$$
DELIMITER ;
Concurrency should still be reviewed, but I think it works: the auto-increment counter is shared across transactions, while the row you insert (and hence its id) is not visible to other transactions, so the MAX() read should return your own value.
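For illustration, a usage sketch against a hypothetical table1(data, seq_no) (names assumed; the question's conditions c1/c2 are not modeled): reuse the existing sequence number if the value has been seen before, otherwise draw a fresh one from getseq(). LAST_INSERT_ID() inside getseq() would be a session-safe alternative to the MAX() read.
-- Hypothetical schema for illustration only:
-- CREATE TABLE table1 (data VARCHAR(64) NOT NULL, seq_no BIGINT NOT NULL);
SET @seq := (SELECT MAX(seq_no) FROM table1 WHERE data = 'A');
-- Draw a new number only when the value has not been seen before.
-- If COALESCE evaluates getseq() anyway, a number is wasted (a gap), but the result stays correct.
SET @seq := COALESCE(@seq, getseq());
INSERT INTO table1 (data, seq_no) VALUES ('A', @seq);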

Related

How can auto-incrementing be maintained when concurrent transactions occur on a compound key in MySQL?

I recently encountered an error in my application with concurrent transactions. Previously, auto-incrementing for the compound key was implemented in the application itself, in PHP. However, as I mentioned, the id got duplicated, and all sorts of issues happened which I painstakingly fixed manually afterward.
I have since read about related issues and found suggestions to use a trigger.
So I am planning on implementing a trigger somewhat like this:
DELIMITER $$
CREATE TRIGGER auto_increment_my_table
BEFORE INSERT ON my_table FOR EACH ROW
BEGIN
  -- The subquery must be parenthesized; IFNULL covers the first row of a type.
  SET NEW.id = (SELECT IFNULL(MAX(id), 0) + 1 FROM my_table WHERE type = NEW.type);
END $$
DELIMITER ;
But my doubt regarding concurrency still remains. Like what if this trigger was executed concurrently and both got the same MAX(id) when querying?
Is this the correct way to handle my issue or is there any better way?
An example of how to solve auto-incrementing in a compound index.
CREATE TABLE test (
  id INT,
  type VARCHAR(192),
  value INT,
  PRIMARY KEY (id, type)
);
-- create an additional service table which will help
CREATE TABLE test_service (
  type VARCHAR(192),
  id INT AUTO_INCREMENT,
  PRIMARY KEY (type, id)
) ENGINE = MyISAM;
-- create a trigger which will generate the id value for each new row
DELIMITER $$
CREATE TRIGGER tr_bi_test_autoincrement
BEFORE INSERT
ON test
FOR EACH ROW
BEGIN
  -- MyISAM gives a per-type counter for the (type, id) primary key
  INSERT INTO test_service (type) VALUES (NEW.type);
  SET NEW.id = LAST_INSERT_ID();
END$$
DELIMITER ;
db<>fiddle here
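For illustration, a few inserts against the tables above; the expected per-type numbering is assumed from MyISAM's grouped auto-increment and not re-verified here.
INSERT INTO test (type, value) VALUES ('apple', 10);
INSERT INTO test (type, value) VALUES ('apple', 20);
INSERT INTO test (type, value) VALUES ('banana', 30);
SELECT * FROM test ORDER BY type, id;
-- Expected: (1, 'apple', 10), (2, 'apple', 20), (1, 'banana', 30)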
creating a service table just to auto increment a value seems less than ideal for me. – Mohamed Mufeed
This table is extremely tiny - you can delete all records except the one with the largest auto-incremented value in each group at any time. – Akina
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=61f0dc36db25dd5f0cf4647d8970cdee
You can schedule the removal of the excess rows (for example, daily) with an event procedure.
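A sketch of such a cleanup event, assuming the event scheduler is enabled and using the table names from the example above:
DELIMITER $$
CREATE EVENT ev_trim_test_service
ON SCHEDULE EVERY 1 DAY
DO
BEGIN
  -- Keep only the row with the largest id per type; MyISAM computes the
  -- next per-type value from the remaining maximum, so numbering continues.
  DELETE s FROM test_service AS s
  JOIN (SELECT type, MAX(id) AS max_id FROM test_service GROUP BY type) AS m
    ON s.type = m.type AND s.id < m.max_id;
END$$
DELIMITER ;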
I have managed to solve this issue.
The answer was somewhat in the direction of Akina's answer, but not quite exactly.
The way I solved it did indeed involve an additional table, but not in the way he suggested.
I created an additional table to store metadata about the transactions.
E.g., I had a journals table like this:
CREATE TABLE `journals` (
`id` bigint NOT NULL AUTO_INCREMENT,
`type` smallint NOT NULL DEFAULT '0',
`trans_no` bigint NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `transaction` (`type`,`trans_no`)
)
So I created a meta_journals table like this
CREATE TABLE `meta_journals` (
  `type` smallint NOT NULL,
  `next_trans_no` bigint NOT NULL,
  PRIMARY KEY (`type`)
)
and seeded it with all the different types of journals and the next sequence number for each.
Whenever I insert a new transaction into journals, I make sure to increment the next_trans_no of the corresponding type in the meta_journals table. This increment is issued inside the same database transaction, i.e. between the same BEGIN and COMMIT.
This lets me rely on the exclusive lock the UPDATE statement acquires on the row of meta_journals. So when two inserts into journals are issued concurrently, one has to wait until the lock acquired by the other transaction is released by its COMMIT.
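A sketch of that transaction (statement order and the literal type value are assumed; the point is that the UPDATE takes the row lock that serializes concurrent inserts of the same type):
START TRANSACTION;
-- Reserve the next number; a concurrent insert of the same type blocks here.
UPDATE meta_journals SET next_trans_no = next_trans_no + 1 WHERE type = 1;
-- Read back the value reserved by this transaction (the pre-increment value).
SELECT next_trans_no - 1 INTO @trans_no FROM meta_journals WHERE type = 1;
INSERT INTO journals (type, trans_no) VALUES (1, @trans_no);
COMMIT;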

Duplicate values in reference tables

Our application calls a stored procedure to normalize its data into reference tables, after which it inserts a record into the main table, partially containing values and partially containing ids that map to the reference tables. This is one of the stored procedures:
CREATE PROCEDURE `sp_name`(IN valueIn varchar(100), OUT valueOut int)
BEGIN
declare maxid int;
declare countid int;
select max(id) into valueOut from tableName where fieldName=valueIn;
IF valueOut is NULL
THEN
start transaction with consistent snapshot;
select count(*) into countid from tableName where fieldName=valueIn;
IF countid=0
THEN
insert into tableName (fieldName) values (valueIn);
select last_insert_id() into valueOut;
ELSE
select max(id) into valueOut from tableName where fieldName=ValueIn;
end IF;
commit;
end IF;
END
When called manually this works fine, but when called in production we end up with multiple duplicate values in the reference tables.
The transaction isolation level is REPEATABLE READ.
Ref table:
CREATE TABLE `tableName` (
`id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
`fieldName` varchar(45) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=100 DEFAULT CHARSET=utf8
Using a unique key constraint on fieldName isn't a good option. We have tried this, but then instead of getting duplicate values we see that the auto-increment skips IDs. We are trying to preserve IDs so that we do not need to over-allocate when it comes to data types. Our main table is huge (multiple billions of rows), so we have to make efficient use of data types.
Anybody out there that understands this phenomenon?
There are a lot of hurdles you have to clear if you want to build your own replacement for auto_increment. You'll find problems of serializability, concurrency, performance (usually related to locking), etc.
I think the simplest solution might be to use auto_increment on a column of type bigint unsigned. The maximum value of an unsigned integer is 4,294,967,295: roughly 4x10^9. The maximum value of an unsigned bigint is 18,446,744,073,709,551,615: roughly 1.8x10^19.
The auto_increment will still skip id numbers, but that's by design, and it shouldn't cause trouble with a range of 1.8x10^19.
Before you commit to this path, test big numbers with your client software. Some still don't deal gracefully with bigint, signed or not.
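A sketch of that path, assuming the reference table from the question and that the unique key on fieldName is reinstated (gaps in the id are then accepted as by design); the INSERT ... ON DUPLICATE KEY pattern is a common add-on, not part of the answer itself:
ALTER TABLE `tableName`
  MODIFY `id` BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  ADD UNIQUE KEY `uq_fieldName` (`fieldName`);
-- Look up or create the row in one statement; LAST_INSERT_ID(id) makes
-- LAST_INSERT_ID() return the existing id when the row is already there.
INSERT INTO `tableName` (`fieldName`) VALUES ('some value')
ON DUPLICATE KEY UPDATE `id` = LAST_INSERT_ID(`id`);
SELECT LAST_INSERT_ID();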

How to prevent creation of records where the value of two fields is the same?

I have the following table.
CREATE TABLE people(
first_name VARCHAR(128) NOT NULL,
nick_name VARCHAR(128) NULL
)
I would like to prevent people from having their nickname be the same as their first name when they attempt that insertion. I do not want to create an index on either of the columns, just a rule to prevent the insertion of records where the first_name and nick_name are the same.
Is there a way to create a rule to prevent insertion of records where the first_name would equal the nick_name?
DELIMITER $$
CREATE TRIGGER `nicknameCheck` BEFORE INSERT ON `people` FOR EACH ROW
BEGIN
  IF (NEW.first_name = NEW.nick_name) THEN
    SET NEW.nick_name = NULL;
  END IF;
END$$
DELIMITER ;
Or you can set first_name to NULL instead; since that column is NOT NULL, this causes an SQL error, which you can handle and turn into a warning.
You only need triggers for BEFORE INSERT and BEFORE UPDATE. Let these check the values and abort the operation if they are equal.
Caveat: on older but still widely used versions of MySQL (before 5.5, IIRC) you need to do something ugly in order to abort, such as reading from the table being written to or, more easily, reading from a nonexistent table/column.
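On MySQL 5.5 and later the abort can be done cleanly with SIGNAL; a minimal sketch for the insert case (the update trigger is analogous, and the trigger name is illustrative):
DELIMITER $$
CREATE TRIGGER people_bi_no_same_nick
BEFORE INSERT ON people
FOR EACH ROW
BEGIN
  IF NEW.nick_name = NEW.first_name THEN
    -- Raising an error aborts the INSERT.
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'nick_name must differ from first_name';
  END IF;
END$$
DELIMITER ;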
AFTER INSERT trigger to test and remove if same ...
CREATE TABLE ek_test (
id INT PRIMARY KEY NOT NULL AUTO_INCREMENT,
one INT NOT NULL,
two INT NOT NULL
);
delimiter //
CREATE TRIGGER ek_test_one_two_differ AFTER INSERT ON ek_test
FOR EACH ROW
BEGIN
IF (new.one = new.two) THEN
DELETE FROM ek_test WHERE id = new.id;
END IF;
END//
delimiter ;
INSERT INTO ek_test (one, two) VALUES (1, 1);
SELECT * FROM ek_test;
NOTE: you will also need an AFTER UPDATE trigger.

Emulate auto-increment in MySQL/InnoDB

Assume I am going to emulate auto-increment in MySQL/InnoDB.
Conditions:
Using MySQL/InnoDB.
The ID field doesn't have a unique index, nor is it a PK.
Is it possible to emulate this using only program logic, without a table-level lock?
Thanks.
Use a sequence table and a trigger - something like this:
drop table if exists users_seq;
create table users_seq
(
next_seq_id int unsigned not null default 0
)engine = innodb;
drop table if exists users;
create table users
(
user_id int unsigned not null primary key,
username varchar(32) not null
)engine = innodb;
insert into users_seq values (0);
delimiter #
create trigger users_before_ins_trig before insert on users
for each row
begin
declare id int unsigned default 0;
select next_seq_id + 1 into id from users_seq for update; -- lock the counter row so concurrent inserts serialize
set new.user_id = id;
update users_seq set next_seq_id = id;
end#
delimiter ;
insert into users (username) values ('f00'),('bar'),('bish'),('bash'),('bosh');
select * from users;
select * from users_seq;
insert into users (username) values ('newbie');
select * from users;
select * from users_seq;
CREATE TABLE sequence (id INTEGER); -- possibly add a name;
INSERT INTO sequence VALUES (1); -- starting value
SET AUTOCOMMIT=0;
START TRANSACTION;
UPDATE sequence SET id = LAST_INSERT_ID(id+1);
INSERT INTO actualtable (non_autoincrementing_key) VALUES (LAST_INSERT_ID());
COMMIT;
SELECT LAST_INSERT_ID(); is even a session-safe way to check which ID you got. Be sure your table supports transactions, or that holes in the sequence are not a problem.
Create another table with a single row and column that stores the next id value. Then create an insert trigger on the original table that increments the value in the second table, grabs it, and uses that for the ID column on the first table. You would need to be careful with the way you do the select and update to ensure they are atomic.
Essentially you are emulating an Oracle sequence in MySQL. It would cause a lock on a single row in the sequence table, though, so that may make it inappropriate for what you are doing.
ETA:
Another similar but maybe better performing option would be to create a second "sequence" table that just has a single auto-increment PK column and no other data. Have your insert trigger insert a row into that table and use the generated ID from there to populate the ID in the original table. Then either have the trigger or another process periodically delete all the rows from the sequence table to clean it up.
The sequence table needs to have id as the auto-increment PK.
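A sketch of that variant, with assumed names (my_table stands in for the original table, whose id column is plain, per the question):
CREATE TABLE my_table_seq (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE = InnoDB;
DELIMITER $$
CREATE TRIGGER my_table_bi_seq
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
  -- The helper table exists only to hand out ids.
  INSERT INTO my_table_seq VALUES (NULL);
  SET NEW.id = LAST_INSERT_ID();
END$$
DELIMITER ;
-- Periodic cleanup that keeps the newest row, so the counter survives restarts:
-- DELETE FROM my_table_seq
-- WHERE id < (SELECT m FROM (SELECT MAX(id) AS m FROM my_table_seq) AS x);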

MySql autoincrement column increases by 10 problem

I am a user of a hosting company which serves my MySQL database. Due to their replication setup, the auto-increment values increase by 10, which seems to be a common problem.
My question is how I can (safely) simulate the auto-increment feature so that the column has consecutive IDs.
My idea was to implement some sequence mechanism to solve my problem, but I do not know if it is the best option. I found this code snippet on the web:
DELIMITER ;;
DROP TABLE IF EXISTS `sequence`;;
CREATE TABLE `sequence` (
`name` CHAR(16) NOT NULL,
`value` BIGINT UNSIGNED NOT NULL,
PRIMARY KEY (`name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;;
DROP FUNCTION IF EXISTS `nextval`;;
CREATE FUNCTION `nextval`(thename CHAR(16) CHARSET latin1)
RETURNS BIGINT UNSIGNED
MODIFIES SQL DATA
SQL SECURITY DEFINER
BEGIN
INSERT INTO `sequence`
SET `name`=thename,
    `value`=(@val:=@@auto_increment_offset)+@@auto_increment_increment
ON DUPLICATE KEY
UPDATE `value`=(@val:=`value`)+@@auto_increment_increment;
RETURN @val;
END ;;
DELIMITER ;
which seems basically correct. My second question is whether this solution is concurrency-safe. Of course the INSERT statement is, but what about the ON DUPLICATE KEY UPDATE?
Thanks!
Why do you need to have it in the first place?
Even with auto_increment_increment == 1 you are not guaranteed that the auto-increment field in the table will have consecutive values (what if rows are deleted, hmm?).
With auto-increment you are simply guaranteed by the db engine that the field will be unique, nothing else, really.
EDIT: I want to reiterate: in my opinion, it is not a good idea to assume things like consecutive values in an auto-increment column, because it is going to bite you later.
EDIT2: Anyway, this can be "solved" by an "on insert" trigger:
DELIMITER $$
CREATE TRIGGER `sequence_b_ins` BEFORE INSERT ON `sequence`
FOR EACH ROW
BEGIN
  SET NEW.id = (SELECT IFNULL(MAX(id), 0) + 1 FROM `sequence`);
END$$
DELIMITER ;
Or something along these lines (sorry, not tested).
Another option would be to use a stored proc to do the insert, and have it either select the max id from your table or keep another table with the current id being used and update it as IDs are used.
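A sketch of that stored-procedure variant, assuming the counter table is the `sequence` table from the question but converted to InnoDB (MyISAM would not honor the row lock), and a hypothetical target table mytable(id, payload):
DELIMITER $$
CREATE PROCEDURE insert_with_seq(IN p_payload VARCHAR(64))
BEGIN
  DECLARE next_id BIGINT UNSIGNED;
  START TRANSACTION;
    -- The locking read serializes concurrent callers on the counter row.
    SELECT `value` + 1 INTO next_id
    FROM `sequence`
    WHERE `name` = 'mytable'
    FOR UPDATE;
    UPDATE `sequence` SET `value` = next_id WHERE `name` = 'mytable';
    INSERT INTO mytable (id, payload) VALUES (next_id, p_payload);
  COMMIT;
END$$
DELIMITER ;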