MySQL unique index over two columns - auto increment [duplicate]

I have multiple databases with the same structure, between which data is sometimes copied. In order to maintain data integrity I am using two columns as the primary key. One is a database id, which links to a table with info about each database. The other is a table key. It is not unique, because multiple rows may share this value while having different values in the database_id column.
I am planning on making the two columns into a joint primary key. However, I also want to set the table key to auto-increment - but based on the database_id column.
E.g., with this data:
table_id | database_id | other_columns
       1 |           1 |
       2 |           1 |
       3 |           1 |
       1 |           2 |
       2 |           2 |
If I am adding data that includes a database_id of 1, then I want table_id to be automatically set to 4. If the database_id is entered as 2, then I want table_id to be automatically set to 3, etc.
What is the best way of achieving this in MySQL?

If you are using MyISAM:
http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
    grp ENUM('fish','mammal','bird') NOT NULL,
    id MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp,id)
) ENGINE=MyISAM;

INSERT INTO animals (grp,name) VALUES
    ('mammal','dog'),('mammal','cat'),
    ('bird','penguin'),('fish','lax'),('mammal','whale'),
    ('bird','ostrich');

SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp    | id | name    |
+--------+----+---------+
| fish   |  1 | lax     |
| mammal |  1 | dog     |
| mammal |  2 | cat     |
| mammal |  3 | whale   |
| bird   |  1 | penguin |
| bird   |  2 | ostrich |
+--------+----+---------+
For your example:
mysql> CREATE TABLE mytable (
-> table_id MEDIUMINT NOT NULL AUTO_INCREMENT,
-> database_id MEDIUMINT NOT NULL,
-> other_column CHAR(30) NOT NULL,
-> PRIMARY KEY (database_id,table_id)
-> ) ENGINE=MyISAM;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO mytable (database_id, other_column) VALUES
-> (1,'Foo'),(1,'Bar'),(2,'Baz'),(1,'Bam'),(2,'Zam'),(3,'Zoo');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM mytable ORDER BY database_id,table_id;
+----------+-------------+--------------+
| table_id | database_id | other_column |
+----------+-------------+--------------+
|        1 |           1 | Foo          |
|        2 |           1 | Bar          |
|        3 |           1 | Bam          |
|        1 |           2 | Baz          |
|        2 |           2 | Zam          |
|        1 |           3 | Zoo          |
+----------+-------------+--------------+
6 rows in set (0.00 sec)

Here's one approach for InnoDB, which will also perform very well thanks to the clustered composite index - only available with InnoDB...
http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
drop table if exists db;
create table db
(
    db_id smallint unsigned not null auto_increment primary key,
    next_table_id int unsigned not null default 0
) engine=innodb;

drop table if exists tables;
create table tables
(
    db_id smallint unsigned not null,
    table_id int unsigned not null default 0,
    primary key (db_id, table_id) -- composite clustered index
) engine=innodb;

delimiter #

create trigger tables_before_ins_trig before insert on tables
for each row
begin
    declare v_id int unsigned default 0;
    -- read the per-database counter, bump it, and use it as the new table_id
    select next_table_id + 1 into v_id from db where db_id = new.db_id;
    set new.table_id = v_id;
    update db set next_table_id = v_id where db_id = new.db_id;
end#

delimiter ;

-- seed three databases (0 rather than NULL so this also works in strict SQL mode)
insert into db (next_table_id) values (0),(0),(0);
insert into tables (db_id) values (1),(1),(2),(1),(3),(2);

select * from db;
select * from tables;
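One practical follow-up (my own note, not part of the original answer): because table_id is assigned by the trigger rather than by AUTO_INCREMENT, LAST_INSERT_ID() will not report it; the value just assigned can instead be read back from the counter row, ideally inside the same transaction as the insert:
-- fetch the table_id most recently assigned for database 1
select next_table_id from db where db_id = 1;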

You can make the two-column primary key a unique key instead, and make the auto-increment key the primary key.
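A minimal sketch of that idea for InnoDB, using the column names from the question (the surrogate id column and the uq_db_table index name are my own illustration): the auto-increment column becomes the primary key, which satisfies InnoDB's rule that there can be only one auto column and it must be defined as a key, while the (database_id, table_id) pair is enforced as unique. Note that table_id itself still has to be populated per database, e.g. by a trigger as in the previous answer.
CREATE TABLE mytable (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    database_id MEDIUMINT NOT NULL,
    table_id MEDIUMINT NOT NULL,
    other_column CHAR(30) NOT NULL,
    PRIMARY KEY (id),                               -- the auto-increment key is primary
    UNIQUE KEY uq_db_table (database_id, table_id)  -- the two-column key is unique
) ENGINE=InnoDB;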

The solution provided by DTing is excellent and works. But when I tried the same in AWS Aurora, it didn't work and complained with the error below:
Error Code: 1075. Incorrect table definition; there can be only one auto column and it must be defined as a key
Hence I am suggesting a JSON-based solution here.
CREATE TABLE DB_TABLE_XREF (
    db VARCHAR(36) NOT NULL,
    tables JSON,
    PRIMARY KEY (db)
);
Keep the first key as a normal column, keep the second key inside the JSON document, and derive its value from an auto-increment sequence ("seq") that is also stored in the JSON.
INSERT INTO `DB_TABLE_XREF`
(`db`, `tables`)
VALUES
('account_db', '{"user_info": 1, "seq" : 1}')
ON DUPLICATE KEY UPDATE `tables` =
JSON_SET(`tables`,
'$."user_info"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1),
'$."seq"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1)
);
The output looks like this:
account_db {"user_info" : 1, "user_details" : 2, "seq" : 2}
product_db {"product1" : 1, "product2" : 2, "product3" : 3, "seq" : 3}
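To read back the sequence value that the upsert assigned to a given table key, one option (my own sketch, not part of the original answer) is the ->> JSON extraction operator:
-- fetch the id assigned to the "user_info" key of account_db
SELECT `tables` ->> '$."user_info"' AS user_info_id
FROM DB_TABLE_XREF
WHERE db = 'account_db';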
If your secondary keys are huge and you are wary of using JSON, then I would suggest a stored procedure that checks MAX(secondary_column) while holding a lock, like the fragment below (which assumes a plain (`database`, `table`, table_id) layout for DB_TABLE_XREF rather than the JSON layout above).
-- fragment of a stored procedure; t_id and acq_lock are assumed to be DECLAREd earlier
SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
IF t_id = 0 THEN
    SELECT GET_LOCK(db_name, 10) INTO acq_lock;
    -- CALL debug_msg(TRUE, "Acquiring lock");
    IF acq_lock = 1 THEN
        -- double-check under the lock
        SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
        IF t_id = 0 THEN
            SELECT IFNULL((SELECT MAX(table_id) FROM (SELECT table_id FROM DB_TABLE_XREF WHERE `database` = db_name) AS something), 0) + 1 INTO t_id;
            INSERT INTO DB_TABLE_XREF VALUES (db_name, table_name, t_id);
        END IF;
        DO RELEASE_LOCK(db_name);  -- release the named lock once done
    ELSE
        -- CALL debug_msg(TRUE, "Failed to acquire lock");
        SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Failed to acquire lock';
    END IF;
END IF;
COMMIT;

Related

Get id of records updated with ON DUPLICATE KEY UPDATE

I want to know if there is a way to get the ID of records updated with ON DUPLICATE KEY UPDATE.
For example, I have the users table with the following schema:
CREATE TABLE `users` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(255) NOT NULL,
`username` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idx-users-email` (`email`)
);
and insert some users:
INSERT INTO users (email, username) VALUES ("pioz@example.org", "pioz"),("luke@example.org", "luke"),("mike@example.org", "mike");
the result is:
+----+------------------+----------+
| id | email            | username |
+----+------------------+----------+
|  1 | pioz@example.org | pioz     |
|  2 | luke@example.org | luke     |
|  3 | mike@example.org | mike     |
+----+------------------+----------+
Now I want to know if, with a query like the following one, it is possible to get the IDs of the updated records:
INSERT INTO users (email, username) VALUES ("luke@example.org", "luke2"),("mike@example.org", "mike2") ON DUPLICATE KEY UPDATE username=VALUES(username);
In this example, IDs 2 and 3.
It seems that the only solution is to use a stored procedure. Here is an example for one row, which could be expanded.
See dbFiddle link below for schema and testing.
DELIMITER $$
CREATE PROCEDURE add_update_user(IN e_mail VARCHAR(25), IN user_name VARCHAR(25))
BEGIN
    DECLARE maxB4 INT DEFAULT 0;
    DECLARE current INT DEFAULT 0;
    SELECT MAX(ID) INTO maxB4 FROM users;
    INSERT INTO users (email, username) VALUES
        (e_mail, user_name)
        ON DUPLICATE KEY UPDATE username = VALUES(username);
    SELECT ID INTO current FROM users WHERE email = e_mail;
    SELECT CASE WHEN maxB4 < current THEN CONCAT('New user with ID ', current, ' created')
                ELSE CONCAT('User with ID ', current, ' updated') END Report;
    /*SELECT CASE WHEN maxB4 < current THEN 1 ELSE 0 END;*/
END$$
DELIMITER ;
call add_update_user('jake#example.com','Jake');
| Report |
| :------------------------- |
| New user with ID 6 created |
call add_update_user('jake#example.com','Jason');
| Report |
| :--------------------- |
| User with ID 6 updated |
db<>fiddle here
Plan A: Use the technique in the ref manual -- see LAST_INSERT_ID() (a sketch follows below)
Plan B: Get rid of id and make email the PRIMARY KEY
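A minimal sketch of Plan A against the users table above (my own illustration of the manual's LAST_INSERT_ID() trick): fold id = LAST_INSERT_ID(id) into the ON DUPLICATE KEY UPDATE clause so that LAST_INSERT_ID() reports the id of the updated row as well as of a freshly inserted one.
INSERT INTO users (email, username)
VALUES ('luke@example.org', 'luke2')
ON DUPLICATE KEY UPDATE
    id = LAST_INSERT_ID(id),     -- makes LAST_INSERT_ID() return the existing row's id on update
    username = VALUES(username);

SELECT LAST_INSERT_ID();          -- id of the inserted or updated row
Note this reports one id per statement, so multi-row upserts would need to be issued row by row.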

How to Add Auto Incremental ID Column to Table with data

How do I add an auto-incrementing ID column to a table that already has data, i.e. the new column gets filled with IDs for the existing rows?
Old user table with data:
Name, email
abc, abc@gmail.com
abcd, abcd@gmial.com
EXPECTED OUTPUT
id, Name, email
1, abc, abc@gmail.com
2, abcd, abcd@gmial.com
You can write:
ALTER TABLE MY_TABLE ADD id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
ALTER TABLE does not allow you to order the new field, so perhaps you need to do it in two stages. For example:
DROP TABLE IF EXISTS T;
CREATE TABLE T (NAME VARCHAR(10), EMAIL VARCHAR(20));
INSERT INTO T VALUES ('ABC','abc@gmail.com'),('ABCD','abcd@gmial.com');

ALTER TABLE T
    ADD COLUMN ID INT FIRST;

UPDATE T JOIN (
    SELECT NAME, ROW_NUMBER() OVER (ORDER BY NAME DESC) RN FROM T
) S ON S.NAME = T.NAME
SET T.ID = S.RN;

ALTER TABLE T
    MODIFY COLUMN ID INT AUTO_INCREMENT PRIMARY KEY;

SELECT * FROM T;
Here ID is added and then populated for the existing records using ROW_NUMBER() with an ORDER BY.
With this result
+----+------+----------------+
| ID | NAME | EMAIL          |
+----+------+----------------+
|  1 | ABCD | abcd@gmial.com |
|  2 | ABC  | abc@gmail.com  |
+----+------+----------------+
2 rows in set (0.00 sec)
The second ALTER statement then modifies ID.
show create table t;
CREATE TABLE `t` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`NAME` varchar(10) DEFAULT NULL,
`EMAIL` varchar(20) DEFAULT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
If you are not on MySQL version 8 or above, you will need to replace the ROW_NUMBER() bit with a row-number simulation using variables (see the sketch below).
Not that it makes any difference anyway, since any query would still need to include an ORDER BY to get the outcome you want.
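For pre-8.0 servers, a variable-based sketch of the same renumbering (my own illustration, assuming the ID column has already been added by the ALTER TABLE ... ADD COLUMN step above):
-- simulate ROW_NUMBER(): number rows in NAME descending order, as in the 8.0 version
SET @rn := 0;
UPDATE T
SET ID = (@rn := @rn + 1)
ORDER BY NAME DESC;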

soft delete on update in mysql with multiple schemas

I am in this situation:
Background
I have 2 database schemas called "prod" and "stg".
"prod" contains 2 tables called "parent" and "child".
"stg" only has the "parent" table.
The "parent" table definition is the same across the "prod" and "stg" schemas.
When deleting records, the "parent" table uses soft deletes (logical deletion, i.e. DELETE_FLG is set to "1"), whereas the "child" table uses true deletes (the record is physically removed).
Goal
I am trying to achieve the following goal:
when and only when both "prod"."parent" and "stg"."parent" are deleted (whether physically or logically, or the row does not exist on one side), automatically cascade a delete operation (physical removal) to the records in the "prod"."child" table whose "SP_ID" matches the value in "parent".
For example, assuming I have
"prod"."parent"
+-------+---------+------------+
| SP_ID | SP_NAME | DELETE_FLG |
+-------+---------+------------+
| 1     | 1       | 1          |
+-------+---------+------------+
"stg"."parent"
+-------+---------+------------+
| SP_ID | SP_NAME | DELETE_FLG |
+-------+---------+------------+
| 1     | 1       | 0          |
+-------+---------+------------+
"prod"."child"
+-------+---------+
| SP_ID | JOB_KEY |
+-------+---------+
| 1     | key     |
+-------+---------+
then, if I execute the SQL update "stg"."parent" set DELETE_FLG = 1 where SP_ID = 1 (which logically deletes the last "existing" record with SP_ID 1 in either "parent" table), the record in "prod"."child" should automatically be physically deleted by MySQL.
Question
I have been thinking about making the SP_ID in the child table a foreign key referencing the one in the parent table (https://dev.mysql.com/doc/refman/8.0/en/create-table-foreign-keys.html)
however,
a) I don't know whether it is possible to reference multiple tables in different schemas, and
b) It seems MySQL only supports cascading the same operation, i.e. a delete on the parent deletes the child, OR an update on the parent updates the child. But in my case, I want an update on the parent to delete the child.
Could somebody help me out here please?
Is this possible to achieve in MySQL, or do I have to do this in the application layer?
Table definition
CREATE TABLE `prod`.`parent` (
`SP_ID` varchar(20) NOT NULL COMMENT '',
`SP_NAME` varchar(100) NOT NULL COMMENT '',
`DELETE_FLG` tinyint(1) NOT NULL DEFAULT '0' COMMENT '',
PRIMARY KEY (`SP_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT=''
CREATE TABLE `prod`.`child` (
`SP_ID` varchar(20) NOT NULL COMMENT '',
`JOB_KEY` varchar(11) NOT NULL,
PRIMARY KEY (`SP_ID`,`JOB_KEY`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT=''
CREATE TABLE `stg`.`parent` (
`SP_ID` varchar(20) NOT NULL COMMENT '',
`SP_NAME` varchar(100) NOT NULL COMMENT '',
`DELETE_FLG` tinyint(1) NOT NULL DEFAULT '0' COMMENT '',
PRIMARY KEY (`SP_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT=''
With the hint of using triggers, this is my solution, which works:
Add 2 triggers (one AFTER UPDATE, one AFTER DELETE) to both the prod and stg parent tables.
# after update trigger
CREATE DEFINER=`root`@`localhost` TRIGGER `stg`.`parent_AFTER_UPDATE` AFTER UPDATE ON `parent` FOR EACH ROW
BEGIN
    IF (
        select count(*)
        from (
            select *
            from `prod`.`parent`
            where `prod`.`parent`.sp_id = old.sp_id and `prod`.`parent`.delete_flg = 0
            union all
            select *
            from `stg`.`parent`
            where `stg`.`parent`.sp_id = old.sp_id and `stg`.`parent`.delete_flg = 0
        ) as a
    ) = 0 THEN
        DELETE FROM `prod`.`child` WHERE `prod`.`child`.sp_id = old.sp_id;
    END IF;
END

# after delete trigger
CREATE DEFINER=`root`@`localhost` TRIGGER `stg`.`parent_AFTER_DELETE` AFTER DELETE ON `parent` FOR EACH ROW
BEGIN
    IF (
        select count(*)
        from (
            select *
            from `prod`.`parent`
            where `prod`.`parent`.sp_id = old.sp_id and `prod`.`parent`.delete_flg = 0
            union all
            select *
            from `stg`.`parent`
            where `stg`.`parent`.sp_id = old.sp_id and `stg`.`parent`.delete_flg = 0
        ) as a
    ) = 0 THEN
        DELETE FROM `prod`.`child` WHERE `prod`.`child`.sp_id = old.sp_id;
    END IF;
END

Two primary keys & auto increment

I have a MySQL table with two fields as primary key (ID & Account), ID has AUTO_INCREMENT.
This results in the following MySQL table:
ID | Account
------------------
1 | 1
2 | 1
3 | 2
4 | 3
However, I expected the following result (restart AUTO_INCREMENT for each Account):
ID | Account
------------------
1 | 1
2 | 1
1 | 2
1 | 3
What is wrong in my configuration? How can I fix this?
Thanks!
The functionality you're describing is possible only with the MyISAM engine. You need to specify the CREATE TABLE statement like this:
CREATE TABLE your_table (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    account_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (account_id, id)
) ENGINE = MyISAM;
If you use the InnoDB engine, you can use a trigger like this:
DELIMITER #
CREATE TRIGGER `your_table_before_ins_trig` BEFORE INSERT ON `your_table`
FOR EACH ROW
BEGIN
    DECLARE next_id INT UNSIGNED DEFAULT 1;
    -- get the next ID for your Account number
    SELECT MAX(ID) + 1 INTO next_id FROM your_table WHERE Account = new.Account;
    -- if there is no row for this Account number yet, set the ID to 1 by default
    IF next_id IS NULL THEN SET next_id = 1; END IF;
    SET new.ID = next_id;
END#
DELIMITER ;
Note: the delimiter is # in the SQL statement above!
This solution works for a table like yours if you create it without any AUTO_INCREMENT functionality, like this:
CREATE TABLE IF NOT EXISTS `your_table` (
`ID` int(11) NOT NULL,
`Account` int(11) NOT NULL,
PRIMARY KEY (`ID`,`Account`)
);
Now you can insert your values like this:
INSERT INTO your_table (`Account`) VALUES (1);
INSERT INTO your_table (`Account`, `ID`) VALUES (1, 5);
INSERT INTO your_table (`Account`) VALUES (2);
INSERT INTO your_table (`Account`, `ID`) VALUES (3, 10205);
It will result in this:
ID | Account
------------------
1 | 1
2 | 1
1 | 2
1 | 3
