Soft delete on update in MySQL with multiple schemas

I am in this situation:
Background
I have 2 database schemas called "prod" and "stg".
"prod" contains 2 tables called "parent" and "child"
"stg" only has the "parent" table
"parent" table defination is the same across "prod" and "stg" schemas.
In the case of deleting records, "parent" table is defined as soft delete (logically deletion, i.e. set delete_flg as "1") whereas the "child" table is true delete (physically remove the record)
Goal
I am trying to achieve the following goal:
when and only when the record is deleted in both "prod"."parent" and "stg"."parent" (whether physically or logically, or it does not exist on one side), automatically cascade a delete operation (a physical removal) to the records in the "prod"."child" table whose "SP_ID" matches the value in "parent".
For example, assuming I have
"prod"."parent"
+-------+---------+------------+
| SP_ID | SP_NAME | DELETE_FLG |
+-------+---------+------------+
| 1     | 1       | 1          |
+-------+---------+------------+
"prod"."parent"
+----+---------+--------+
| SP_ID | SP_NAME | DELETE_FLG |
+----+---------+--------+
| 1 | 1 | 1 |
+----+---------+--------+
"stg"."parent"
+-------+---------+------------+
| SP_ID | SP_NAME | DELETE_FLG |
+-------+---------+------------+
| 1     | 1       | 0          |
+-------+---------+------------+
"prod"."child"
+-------+---------+
| SP_ID | JOB_KEY |
+-------+---------+
| 1     | key     |
+-------+---------+
then if I execute the SQL update "stg"."parent" set DELETE_FLG = 1 where SP_ID = 1, which logically deletes the last "existing" record in a "parent" table that has SP_ID 1, the record in "prod"."child" should also be automatically physically deleted by MySQL.
Question
I have been thinking about making the SP_ID in the child table a foreign key referencing the one in the parent table (https://dev.mysql.com/doc/refman/8.0/en/create-table-foreign-keys.html)
however,
a) I don't know whether it is possible to reference multiple tables in different schemas, and
b) It seems MySQL only supports cascading the same operation, i.e. a delete on the parent cascades a delete to the child, OR an update on the parent cascades an update to the child. In my case, I want an update on the parent to trigger a delete on the child.
Could somebody help me out here please?
Is this possible to achieve in MySQL, or do I have to do it in the application layer?
Table definition
CREATE TABLE `prod`.`parent` (
  `SP_ID` varchar(20) NOT NULL COMMENT '',
  `SP_NAME` varchar(100) NOT NULL COMMENT '',
  `DELETE_FLG` tinyint(1) NOT NULL DEFAULT '0' COMMENT '',
  PRIMARY KEY (`SP_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='';

CREATE TABLE `prod`.`child` (
  `SP_ID` varchar(20) NOT NULL COMMENT '',
  `JOB_KEY` varchar(11) NOT NULL,
  PRIMARY KEY (`SP_ID`,`JOB_KEY`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='';

CREATE TABLE `stg`.`parent` (
  `SP_ID` varchar(20) NOT NULL COMMENT '',
  `SP_NAME` varchar(100) NOT NULL COMMENT '',
  `DELETE_FLG` tinyint(1) NOT NULL DEFAULT '0' COMMENT '',
  PRIMARY KEY (`SP_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='';

With the hint of using triggers, this is my solution, which works:
Add 2 triggers (one AFTER UPDATE, one AFTER DELETE) to both the prod and stg parent tables. The stg-side triggers are shown below; the prod-side ones are the mirror image (see the sketch after the trigger code).
# after update trigger
DELIMITER $$
CREATE DEFINER=`root`@`localhost` TRIGGER `stg`.`parent_AFTER_UPDATE` AFTER UPDATE ON `parent` FOR EACH ROW
BEGIN
  -- only when neither schema still holds a "live" (DELETE_FLG = 0) parent row
  -- do we physically remove the matching child rows
  IF (
    SELECT count(*)
    FROM (
      SELECT SP_ID
      FROM `prod`.`parent`
      WHERE `prod`.`parent`.SP_ID = OLD.SP_ID AND `prod`.`parent`.DELETE_FLG = 0
      UNION ALL
      SELECT SP_ID
      FROM `stg`.`parent`
      WHERE `stg`.`parent`.SP_ID = OLD.SP_ID AND `stg`.`parent`.DELETE_FLG = 0
    ) AS a
  ) = 0 THEN
    DELETE FROM `prod`.`child` WHERE `prod`.`child`.SP_ID = OLD.SP_ID;
  END IF;
END$$
DELIMITER ;
# after delete trigger
DELIMITER $$
CREATE DEFINER=`root`@`localhost` TRIGGER `stg`.`parent_AFTER_DELETE` AFTER DELETE ON `parent` FOR EACH ROW
BEGIN
  IF (
    SELECT count(*)
    FROM (
      SELECT SP_ID
      FROM `prod`.`parent`
      WHERE `prod`.`parent`.SP_ID = OLD.SP_ID AND `prod`.`parent`.DELETE_FLG = 0
      UNION ALL
      SELECT SP_ID
      FROM `stg`.`parent`
      WHERE `stg`.`parent`.SP_ID = OLD.SP_ID AND `stg`.`parent`.DELETE_FLG = 0
    ) AS a
  ) = 0 THEN
    DELETE FROM `prod`.`child` WHERE `prod`.`child`.SP_ID = OLD.SP_ID;
  END IF;
END$$
DELIMITER ;
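For completeness, a minimal sketch of the matching prod-side AFTER UPDATE trigger, assuming the same table definitions as above (the prod AFTER DELETE counterpart follows the same pattern):
DELIMITER $$
CREATE DEFINER=`root`@`localhost` TRIGGER `prod`.`parent_AFTER_UPDATE` AFTER UPDATE ON `parent` FOR EACH ROW
BEGIN
  -- same rule as on the stg side: delete child rows only when no live parent
  -- row (DELETE_FLG = 0) with this SP_ID remains in either schema
  IF (
    SELECT count(*)
    FROM (
      SELECT SP_ID FROM `prod`.`parent`
      WHERE `prod`.`parent`.SP_ID = OLD.SP_ID AND `prod`.`parent`.DELETE_FLG = 0
      UNION ALL
      SELECT SP_ID FROM `stg`.`parent`
      WHERE `stg`.`parent`.SP_ID = OLD.SP_ID AND `stg`.`parent`.DELETE_FLG = 0
    ) AS a
  ) = 0 THEN
    DELETE FROM `prod`.`child` WHERE `prod`.`child`.SP_ID = OLD.SP_ID;
  END IF;
END$$
DELIMITER ;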

Related

Get id of records updated with ON DUPLICATE KEY UPDATE

I want to know if there is a way to get the IDs of records updated with ON DUPLICATE KEY UPDATE.
For example, I have the users table with the following schema:
CREATE TABLE `users` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(255) NOT NULL,
`username` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idx-users-email` (`email`)
);
and insert some users:
INSERT INTO users (email, username) VALUES ("pioz@example.org", "pioz"),("luke@example.org", "luke"),("mike@example.org", "mike");
the result is:
+----+------------------+----------+
| id | email            | username |
+----+------------------+----------+
|  1 | pioz@example.org | pioz     |
|  2 | luke@example.org | luke     |
|  3 | mike@example.org | mike     |
+----+------------------+----------+
Now I want to know whether, with a query like the following one, it is possible to get the IDs of the updated records:
INSERT INTO users (email, username) VALUES ("luke@example.org", "luke2"),("mike@example.org", "mike2") ON DUPLICATE KEY UPDATE username=VALUES(username);
In this example ID 2 and 3.
It seems that the only solution is to use a stored procedure. Here is an example for one row, which could be expanded.
See dbFiddle link below for schema and testing.
CREATE PROCEDURE add_update_user(IN e_mail VARCHAR(25), IN user_name VARCHAR(25))
BEGIN
  DECLARE maxB4 INT DEFAULT 0;
  DECLARE current INT DEFAULT 0;
  SELECT MAX(ID) INTO maxB4 FROM users;
  INSERT INTO users (email, username) VALUES
  (e_mail, user_name)
  ON DUPLICATE KEY UPDATE username=VALUES(username);
  SELECT ID INTO current FROM users WHERE email = e_mail;
  SELECT CASE WHEN maxB4 < current THEN CONCAT('New user with ID ', current, ' created')
              ELSE CONCAT('User with ID ', current, ' updated') END Report;
  /*SELECT CASE WHEN maxB4 < current THEN 1 ELSE 0 END;*/
END
call add_update_user('jake@example.com','Jake');
| Report |
| :------------------------- |
| New user with ID 6 created |
call add_update_user('jake@example.com','Jason');
| Report |
| :--------------------- |
| User with ID 6 updated |
db<>fiddle here
Plan A: Use the technique in the ref manual -- see LAST_INSERT_ID()
Plan B: Get rid of id and make email the PRIMARY KEY
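For Plan A, the manual's trick is to route the existing row's id through LAST_INSERT_ID() in the update clause; a minimal sketch against the users table above (it only identifies one row per statement, so a multi-row statement like the one in the question would need to be split up):
INSERT INTO users (email, username)
VALUES ('luke@example.org', 'luke2')
ON DUPLICATE KEY UPDATE
  id = LAST_INSERT_ID(id),   -- no-op assignment that routes the existing id
  username = VALUES(username);

-- returns the id of the row that was inserted or updated (2 in this example)
SELECT LAST_INSERT_ID();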

How to do this with foreign keys?

I have a table called categories, with the following fields:
id (primary key, identifies the categories)
user_id (foreign key, users.id, the creator of the current category)
category_id (foreign key, categories.id, the id of the parent category id)
Inside this table I have some records that are accessible to everyone.
For these records, both the user_id and the category_id fields are NULL.
In addition, users can create their own records (where the user_id field is not NULL), but every user can access only those that he has made himself.
Each record must meet these conditions:
user_id values must be valid
if the category_id is given then it must be a valid categories.id and that record must be created by the given user
the value of the category_id can be a categories.id where the user_id is NULL
How can I do this? I think I need more foreign keys for this, but I am not sure how to do it.
Some examples that may help you to understand my problem:
Let's say I have the following records inside the categories table:
+----+---------+-------------+
| id | user_id | category_id |
+----+---------+-------------+
| 1  | NULL    | NULL        |
+----+---------+-------------+
| 2  | NULL    | NULL        |
+----+---------+-------------+
| 3  | 1       | 1           |
+----+---------+-------------+
| 4  | 2       | 1           |
+----+---------+-------------+
and let's say that I want to insert these records:
+---------+-------------+----------------------------------------------------------------+
| user_id | category_id | is it insertable?                                              |
+---------+-------------+----------------------------------------------------------------+
| 1       | NULL        | yes, because the value of the user_id is a valid id           |
+---------+-------------+----------------------------------------------------------------+
| 1       | 1           | yes, because the record with id of 1 is created by NULL       |
+---------+-------------+----------------------------------------------------------------+
| 1       | 3           | yes, because the record with id of 3 is created by user #1    |
+---------+-------------+----------------------------------------------------------------+
| 1       | 4           | no, because the record with id of 4 is created by another user|
+---------+-------------+----------------------------------------------------------------+
Categories table:
CREATE TABLE `categories` (
  `id` int(11) NOT NULL,
  `user_id` int(11) DEFAULT NULL,
  `category_id` int(11) DEFAULT NULL,
  `name` varchar(150) NOT NULL,
  `type` enum('income','expense') NOT NULL DEFAULT 'income',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

ALTER TABLE `categories`
  ADD PRIMARY KEY (`id`),
  ADD UNIQUE KEY `category_id` (`category_id`,`user_id`,`name`),
  ADD KEY `user_id` (`user_id`);

ALTER TABLE `categories`
  ADD CONSTRAINT `categories_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
  ADD CONSTRAINT `categories_ibfk_2` FOREIGN KEY (`category_id`) REFERENCES `categories` (`id`) ON DELETE CASCADE ON UPDATE CASCADE;
Unfortunately, MySQL foreign keys can only go so far. This is a bit of advanced logic. Might I recommend using a MySQL trigger?
DELIMITER $$
CREATE PROCEDURE `check_categories_user_id`(IN p_category_id INT(11), IN p_user_id INT(11))
BEGIN
  IF (p_category_id IS NOT NULL) THEN
    SET @other_user_id = (SELECT user_id
                          FROM categories
                          WHERE id = p_category_id);
    IF (p_user_id <> @other_user_id) THEN
      SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'check constraint on categories.user_id failed';
    END IF;
  END IF;
END$$
CREATE TRIGGER `categories_before_update` BEFORE UPDATE ON `categories`
FOR EACH ROW
BEGIN
  CALL check_categories_user_id(new.category_id, new.user_id);
END$$
CREATE TRIGGER `categories_before_insert` BEFORE INSERT ON `categories`
FOR EACH ROW
BEGIN
  CALL check_categories_user_id(new.category_id, new.user_id);
END$$
DELIMITER ;
See dbfiddle here. (If you remove the last INSERT query, the SELECT query works as expected)
Additionally, to avoid some overhead (the SELECT query in the trigger), you can just enforce that user_id is NULL when category_id IS NOT NULL (using triggers like the ones above). If category_id is NOT NULL, you just fetch the user_id from the parent row (or from the root row, i.e. the parent's parent and so on, if nested); a sketch follows.
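A minimal sketch of that alternative, assuming the same categories table; the trigger name and message text below are made up for illustration:
DELIMITER $$
CREATE TRIGGER `categories_enforce_null_owner` BEFORE INSERT ON `categories`
FOR EACH ROW
BEGIN
  -- a sub-category inherits its owner from its parent chain,
  -- so it must not carry its own user_id
  IF (NEW.category_id IS NOT NULL AND NEW.user_id IS NOT NULL) THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'user_id must be NULL when category_id is set';
  END IF;
END$$
DELIMITER ;

-- resolving the effective owner one level up (NULL = accessible to everyone):
SELECT COALESCE(c.user_id, p.user_id) AS effective_user_id
FROM categories c
LEFT JOIN categories p ON p.id = c.category_id
WHERE c.id = 3;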

MySQL recursive procedure to delete a record

I have a table, Models, that consists of these (relevant) attributes:
-- -----------------------------------------------------
-- Table `someDB`.`Models`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `someDB`.`Models` (
  `model_id` MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
  `type_id` SMALLINT UNSIGNED NOT NULL,
  -- someOtherAttributes
  PRIMARY KEY (`model_id`)
) ENGINE = InnoDB;
+----------+---------+
| model_id | type_id |
+----------+---------+
|        1 |       4 |
|        2 |       4 |
|        3 |       5 |
|        4 |       3 |
+----------+---------+
And table Model_Hierarchy that shows the parent & child relationship (again, showing only the relevant attributes):
-- -----------------------------------------------------
-- Table `someDB`.`Model_Hierarchy`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `someDB`.`Model_Hierarchy` (
  `parent_id` MEDIUMINT UNSIGNED NOT NULL,
  `child_id` MEDIUMINT UNSIGNED NOT NULL,
  -- someOtherAttributes,
  INDEX `fk_Model_Hierarchy_Models1_idx` (`parent_id` ASC),
  INDEX `fk_Model_Hierarchy_Models2_idx` (`child_id` ASC),
  PRIMARY KEY (`parent_id`, `child_id`),
  CONSTRAINT `fk_Model_Hierarchy_Models1`
    FOREIGN KEY (`parent_id`)
    REFERENCES `someDB`.`Models` (`model_id`)
    ON DELETE CASCADE
    ON UPDATE NO ACTION,
  CONSTRAINT `fk_Model_Hierarchy_Models2`
    FOREIGN KEY (`child_id`)
    REFERENCES `someDB`.`Models` (`model_id`)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION
) ENGINE = InnoDB;
+-----------+----------+
| parent_id | child_id |
+-----------+----------+
|         1 |        2 |
|         2 |        4 |
|         3 |        4 |
+-----------+----------+
If there is a Model that is not a parent or child (at some point) of another Model whose type is 5, it is not valid and hence should be deleted.
This means that Model 1, 2 should be deleted because at no point do they have a model as parent or child with type_id = 5.
There are N levels in this hierarchy, but there are no circular relationships (i.e. 1 -> 2 and 2 -> 1 will not both exist).
Any idea on how to do this?
Comments are dispersed throughout the code.
Schema:
CREATE TABLE `Models`
( -- Note that for now the AUTO_INC is ripped out of this for ease of data insertion
-- otherwise we lose control at this point (this is just a test)
-- `model_id` MEDIUMINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
`model_id` MEDIUMINT UNSIGNED PRIMARY KEY,
`type_id` SMALLINT UNSIGNED NOT NULL
)ENGINE = InnoDB;
CREATE TABLE `Model_Hierarchy`
( -- OP comments state these are more like components
--
-- @Drew imagine b being a product and a and c being two different ways to package it.
-- Hence b is contained in both a and c respectively and separately (ie. customer can buy
-- both a and c), however, any change (outside of the scope of this question) to b is
-- reflected to both a and c. `Model_Hierarchy can be altered, yes (the project is
-- in an early development). Max tree depth is unknown (this is for manufacturing...
-- so a component can consist of a component... that consist of further component etc.
-- no real limit). How many rows? Depends, but I don't expect it to exceed 2^32.
--
--
-- Drew's interpretation of the above: `a` is a parent of `b`, `c` is a parent of `b`
--
`parent_id` MEDIUMINT UNSIGNED NOT NULL,
`child_id` MEDIUMINT UNSIGNED NOT NULL,
INDEX `fk_Model_Hierarchy_Models1_idx` (`parent_id` ASC),
INDEX `fk_Model_Hierarchy_Models2_idx` (`child_id` ASC),
PRIMARY KEY (`parent_id`, `child_id`),
key(`child_id`,`parent_id`), -- NoteA1 pair flipped the other way (see NoteA2 in stored proc)
CONSTRAINT `fk_Model_Hierarchy_Models1`
FOREIGN KEY (`parent_id`)
REFERENCES `Models` (`model_id`)
ON DELETE CASCADE
ON UPDATE NO ACTION,
CONSTRAINT `fk_Model_Hierarchy_Models2`
FOREIGN KEY (`child_id`)
REFERENCES `Models` (`model_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)ENGINE = InnoDB;
CREATE TABLE `GoodIds`
( -- a table to determine what not to delete from models
`id` int auto_increment primary key,
`model_id` MEDIUMINT UNSIGNED,
`has_been_processed` int not null,
dtFinished datetime null,
-- index section (none shown, developer chooses later, as he knows what is going on)
unique index(model_id), -- supports the "insert ignore" concept
-- FK's below:
foreign key `fk_abc_123` (model_id) references Models(model_id)
)ENGINE = InnoDB;
To drop and start over from the top:
-- ------------------------------------------------------------
-- reverse order is happier
drop table `GoodIds`;
drop table `Model_Hierarchy`;
drop table `Models`;
-- ------------------------------------------------------------
Load Test Data:
insert Models(model_id,type_id) values
(1,1),(2,1),(3,1),(4,1),(5,1),(6,1),(7,1),(8,1),(9,5),(10,1),(11,1),(12,1);
-- delete from Models; -- note, truncate does not work on parents of FK's
insert Model_Hierarchy(parent_id,child_id) values
(1,2),(1,3),(1,4),(1,5),
(2,1),(2,4),(2,7),
(3,2),
(4,8),(4,9),
(5,1),
(6,1),(6,2),
(7,1),(7,10),
(8,1),(8,12),
(9,11),
(10,11),
(11,12);
-- Set 2 to test (after a truncate, copy-paste the values below in place of the ones above):
(1,2),(1,3),(1,4),(1,5),
(2,1),(2,4),(2,7),
(3,2),
(4,8),(4,9),
(5,1),
(6,1),(6,2),
(7,1),(7,10),
(8,1),(8,12),
(9,1),
(10,11),
(11,12);
-- truncate table Model_Hierarchy;
-- select * from Model_Hierarchy;
-- select * from Models where type_id=5;
Stored Procedure:
DROP PROCEDURE if exists loadUpGoodIds;
DELIMITER $$
CREATE PROCEDURE loadUpGoodIds()
BEGIN
DECLARE bDone BOOL DEFAULT FALSE;
DECLARE iSillyCounter int DEFAULT 0;
TRUNCATE TABLE GoodIds;
insert GoodIds(model_id,has_been_processed) select model_id,0 from Models where type_id=5;
WHILE bDone = FALSE DO
select min(model_id) into @the_Id_To_Process from GoodIds where has_been_processed=0;
IF @the_Id_To_Process is null THEN
SET bDone=TRUE;
ELSE
-- First, let's say this is the parent id.
-- Find the child id's that this is a parent of
-- and they qualify as A Good Id to save into our Good table
insert ignore GoodIds(model_id,has_been_processed,dtFinished)
select child_id,0,null
from Model_Hierarchy
where parent_id=@the_Id_To_Process;
-- Next, let's say this is the child id.
-- Find the parent id's that this is a child of
-- and they qualify as A Good Id to save into our Good table
insert ignore GoodIds(model_id,has_been_processed,dtFinished)
select child_id,0,null
from Model_Hierarchy
where child_id=@the_Id_To_Process;
-- NoteA2: see NoteA1 in schema
-- you can feel the need for the flipped pair composite key in the above
UPDATE GoodIds set has_been_processed=1,dtFinished=now() where model_id=@the_Id_To_Process;
END IF;
-- safety bailout during development:
SET iSillyCounter = iSillyCounter + 1;
IF iSillyCounter>10000 THEN
SET bDone=TRUE;
END IF;
END WHILE;
END$$
DELIMITER ;
Test:
call loadUpGoodIds();
-- select count(*) from GoodIds; -- 9 / 11 / 12
select * from GoodIds limit 10;
+----+----------+--------------------+---------------------+
| id | model_id | has_been_processed | dtFinished |
+----+----------+--------------------+---------------------+
| 1 | 9 | 1 | 2016-06-28 20:33:16 |
| 2 | 11 | 1 | 2016-06-28 20:33:16 |
| 4 | 12 | 1 | 2016-06-28 20:33:16 |
+----+----------+--------------------+---------------------+
Mop up calls, can be folded into stored proc:
-- The below is what to run
-- delete from Models where model_id not in (select null); -- this is a safe call (will never do anything)
-- the above is just a null test
delete from Models where model_id not in (select model_id from GoodIds);
-- Error 1451: Cannot delete or update a parent row: a FK constraint is unhappy
-- hey the cascades did not work, can figure that out later
-- Let's go bottom up for now. Meaning, to honor FK constraints, kill bottom up.
delete from Model_Hierarchy where parent_id not in (select model_id from GoodIds);
-- 18 rows deleted
delete from Model_Hierarchy where child_id not in (select model_id from GoodIds);
-- 0 rows deleted
delete from Models where model_id not in (select model_id from GoodIds);
-- 9 rows deleted / 3 remain
select * from Models;
+----------+---------+
| model_id | type_id |
+----------+---------+
|        9 |       5 |
|       11 |       1 |
|       12 |       1 |
+----------+---------+
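As a side note, on MySQL 8.0+ the "good" set can also be computed with recursive CTEs instead of a stored procedure, following the question's stated rule (keep every model that is, at any depth, an ancestor or descendant of a type-5 model); a sketch, not run against the data above:
WITH RECURSIVE
  anchors AS (                  -- models of the required type
    SELECT model_id FROM Models WHERE type_id = 5
  ),
  descendants (model_id) AS (   -- anchors plus everything below them
    SELECT model_id FROM anchors
    UNION
    SELECT h.child_id
    FROM Model_Hierarchy h
    JOIN descendants d ON h.parent_id = d.model_id
  ),
  ancestors (model_id) AS (     -- anchors plus everything above them
    SELECT model_id FROM anchors
    UNION
    SELECT h.parent_id
    FROM Model_Hierarchy h
    JOIN ancestors a ON h.child_id = a.model_id
  )
SELECT model_id FROM descendants
UNION
SELECT model_id FROM ancestors;
Deleting everything else would then follow the same bottom-up order as the mop-up statements above (Model_Hierarchy first, then Models).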

MySQL unique index over two columns - auto increment [duplicate]

I have multiple databases with the same structure in which data is sometimes copied across. In order to maintain data integrity I am using two columns as the primary key. One is a database id, which links to a table with info about each database. The other is a table key. It is not unique because it may have multiple rows with this value being the same, but different values in the database_id column.
I am planning on making the two columns into a joint primary key. However I also want to set the table key to auto increment - but based on the database_id column.
E.g., with this data:
table_id  database_id  other_columns
1         1
2         1
3         1
1         2
2         2
If I am adding data that includes a database_id of 1 then I want table_id to be automatically set to 4. If the database_id is entered as 2 then I want table_id to be automatically set to 3, etc.
What is the best way of achieving this in MySQL?
If you are using MyISAM:
http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
For your example:
mysql> CREATE TABLE mytable (
-> table_id MEDIUMINT NOT NULL AUTO_INCREMENT,
-> database_id MEDIUMINT NOT NULL,
-> other_column CHAR(30) NOT NULL,
-> PRIMARY KEY (database_id,table_id)
-> ) ENGINE=MyISAM;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO mytable (database_id, other_column) VALUES
-> (1,'Foo'),(1,'Bar'),(2,'Baz'),(1,'Bam'),(2,'Zam'),(3,'Zoo');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM mytable ORDER BY database_id,table_id;
+----------+-------------+--------------+
| table_id | database_id | other_column |
+----------+-------------+--------------+
| 1 | 1 | Foo |
| 2 | 1 | Bar |
| 3 | 1 | Bam |
| 1 | 2 | Baz |
| 2 | 2 | Zam |
| 1 | 3 | Zoo |
+----------+-------------+--------------+
6 rows in set (0.00 sec)
Here's one approach when using InnoDB, which will also be very performant due to the clustered composite index (only available with InnoDB):
http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
drop table if exists db;
create table db
(
db_id smallint unsigned not null auto_increment primary key,
next_table_id int unsigned not null default 0
)engine=innodb;
drop table if exists tables;
create table tables
(
db_id smallint unsigned not null,
table_id int unsigned not null default 0,
primary key (db_id, table_id) -- composite clustered index
)engine=innodb;
delimiter #
create trigger tables_before_ins_trig before insert on tables
for each row
begin
declare v_id int unsigned default 0;
select next_table_id + 1 into v_id from db where db_id = new.db_id;
set new.table_id = v_id;
update db set next_table_id = v_id where db_id = new.db_id;
end#
delimiter ;
insert into db (next_table_id) values (null),(null),(null);
insert into tables (db_id) values (1),(1),(2),(1),(3),(2);
select * from db;
select * from tables;
You can make the two-column primary key unique and the auto-increment key primary.
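A minimal sketch of one reading of that suggestion, assuming InnoDB: keep a single AUTO_INCREMENT column as the primary key and put a unique index on the (database_id, table_id) pair; table_id still has to be assigned per database_id by the application or a trigger, as in the other answers.
CREATE TABLE mytable (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,   -- surrogate key carries the auto-increment
  database_id MEDIUMINT UNSIGNED NOT NULL,
  table_id MEDIUMINT UNSIGNED NOT NULL,      -- per-database counter, assigned separately
  other_column CHAR(30) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY uk_database_table (database_id, table_id)
) ENGINE=InnoDB;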
The solution provided by DTing is excellent and works. But when I tried the same in AWS Aurora, it didn't work and complained with the error below:
Error Code: 1075. Incorrect table definition; there can be only one auto column and it must be defined as a key
Hence I am suggesting a JSON-based solution here.
CREATE TABLE DB_TABLE_XREF (
  db VARCHAR(36) NOT NULL,
  tables JSON,
  PRIMARY KEY (db)
);
Keep the first key (db) as a regular column, keep the second key inside the JSON document, and maintain the second key's value with an auto-incrementing "seq" counter stored in the same document.
INSERT INTO `DB_TABLE_XREF`
(`db`, `tables`)
VALUES
('account_db', '{"user_info": 1, "seq" : 1}')
ON DUPLICATE KEY UPDATE `tables` =
JSON_SET(`tables`,
'$."user_info"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1),
'$."seq"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1)
);
And the output is like below
account_db {"user_info" : 1, "user_details" : 2, "seq" : 2}
product_db {"product1" : 1, "product2" : 2, "product3" : 3, "seq" : 3}
If your secondary keys are huge and you are afraid of using JSON, then I would suggest a stored procedure that checks MAX(secondary_column) while holding a lock, like below.
SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
IF t_id = 0 THEN
  SELECT GET_LOCK(db_name, 10) INTO acq_lock;
  -- CALL debug_msg(TRUE, "Acquiring lock");
  IF acq_lock = 1 THEN
    SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
    -- double check after acquiring the lock
    IF t_id = 0 THEN
      SELECT IFNULL((SELECT MAX(table_id) FROM (SELECT table_id FROM DB_TABLE_XREF WHERE `database` = db_name) AS something), 0) + 1 INTO t_id;
      INSERT INTO DB_TABLE_XREF VALUES (db_name, table_name, t_id);
    END IF;
  ELSE
    -- CALL debug_msg(TRUE, "Failed to acquire lock");
  END IF;
  COMMIT;
END IF;
