I made a minimal example to describe my problem using the concept of a player using items in a game, which makes the problem easier to understand.
My problem is the following: let's say I have to store items and the player holding them in my database.
I didn't show it on the image, but a player can hold no item at all; this is really important for my problem.
I translated that into:
Which gives the two following tables (with PK_ and #FK):
In my example I can then fill them as such:
By doing:
Now I want any player to have a "favorite" item, so I want to add a foreign key #item_id in the table Player, but I want to make sure this new value refers to an item held by the right player in the Item table. How can I add a (check?) constraint to my table declarations so that this condition is always true, to ensure data integrity between my two tables? I don't think I can solve this by creating a third table, since it's still a 1:n cardinality. Also, I don't want to ALTER the tables; I want to CREATE them this way, since my database is not yet deployed.
You can add a third table which links the player and item tables for each player's favorite item (if you don't want a cyclic reference between player and item). There are two restrictions to solve:
A player must own the item they want to have as a favorite.
A player can have only one favorite item.
The first point can be solved with a multi-column foreign key. The table item needs an index over the playerId and the id of the item. Then your new table for the favorite items can reference this index with a foreign key. Including the playerId in the foreign key ensures that the item isn't "moved" to a different player while it is marked as favorite. The queries look like this:
CREATE TABLE item(
id INT AUTO_INCREMENT PRIMARY KEY NOT NULL,
playerId INT NOT NULL,
INDEX item_id_and_playerId (playerId, id)
);
CREATE TABLE favItem(
id INT AUTO_INCREMENT PRIMARY KEY NOT NULL,
playerId INT NOT NULL,
itemId INT NOT NULL,
FOREIGN KEY favItem_FQ_id_and_playerId (playerId, itemId) REFERENCES item(playerId, id)
);
The second point can be solved by a simple UNIQUE constraint on the playerId column of the new favorite-items table. That way only one item per player can be marked as favorite. We adjust the CREATE TABLE query for the favItem table as follows:
CREATE TABLE favItem(
id INT AUTO_INCREMENT PRIMARY KEY NOT NULL,
playerId INT NOT NULL,
itemId INT NOT NULL,
FOREIGN KEY favItem_FQ_id_and_playerId (playerId, itemId) REFERENCES item(playerId, id),
CONSTRAINT favItem_UQ_playerId UNIQUE (playerId)
);
The following queries show how these constraints work:
mysql> SELECT * FROM player;
+----+
| id |
+----+
| 1 |
| 2 |
| 3 |
+----+
3 rows in set (0.00 sec)
mysql> SELECT * FROM item;
+----+----------+
| id | playerId |
+----+----------+
| 1 | 1 |
| 2 | 1 |
| 3 | 2 |
| 4 | 3 |
| 5 | 3 |
| 6 | 3 |
+----+----------+
6 rows in set (0.00 sec)
mysql> INSERT INTO favItem(playerId, itemId) VALUES (1, 2);
Query OK, 1 row affected (0.01 sec)
mysql> INSERT INTO favItem(playerId, itemId) VALUES (3, 2);
ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails
(`test`.`favItem`, CONSTRAINT `favItem_ibfk_1` FOREIGN KEY
(`playerId`, `itemId`) REFERENCES `item` (`playerId`, `id`))
mysql> INSERT INTO favItem(playerId, itemId) VALUES (1, 1);
ERROR 1062 (23000): Duplicate entry '1' for key 'favItem.favItem_UQ_playerId'
The first row can be added without problem. However, the second row can't be added because the player with playerId=3 does not "own" the item with itemId=2. The third row can't be added either: the player with playerId=1 owns the item with itemId=1, but cannot mark it as a favorite because of the UNIQUE constraint on playerId.
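If you want to sanity-check the two constraints without a MySQL server, the same idea can be reproduced in a few lines of Python with sqlite3 (SQLite also supports multi-column foreign keys once PRAGMA foreign_keys is enabled; the table and column names mirror the answer above, but this is an illustration, not the MySQL DDL itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.executescript("""
CREATE TABLE item(
  id INTEGER PRIMARY KEY,
  playerId INTEGER NOT NULL,
  UNIQUE (playerId, id)              -- the referenced composite key
);
CREATE TABLE favItem(
  id INTEGER PRIMARY KEY,
  playerId INTEGER NOT NULL UNIQUE,  -- one favorite per player
  itemId INTEGER NOT NULL,
  FOREIGN KEY (playerId, itemId) REFERENCES item(playerId, id)
);
INSERT INTO item(id, playerId) VALUES (1,1),(2,1),(3,2);
""")

conn.execute("INSERT INTO favItem(playerId, itemId) VALUES (1, 2)")  # ok: player 1 owns item 2

try:
    conn.execute("INSERT INTO favItem(playerId, itemId) VALUES (2, 2)")  # player 2 does not own item 2
except sqlite3.IntegrityError as e:
    print("rejected:", e)

try:
    conn.execute("INSERT INTO favItem(playerId, itemId) VALUES (1, 1)")  # second favorite for player 1
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Both bad inserts fail for the same reasons as in the MySQL session below: the first violates the composite foreign key, the second the UNIQUE constraint on playerId.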
Alternatively, you can add an isFavorite column to the Item table. Its value is either NULL or a predefined value, such as an ENUM value. Then you can add a UNIQUE constraint over the two columns playerId and isFavorite. That way a player can have only one favorite item, since multiple rows with the same playerId and the same non-NULL isFavorite value would cause a unique-constraint error, while NULL values may repeat freely. The table can look like this:
CREATE TABLE items(
id INT AUTO_INCREMENT PRIMARY KEY NOT NULL,
playerId INT NOT NULL,
isFavorite ENUM('FAV') NULL,
CONSTRAINT items_UQ_fav UNIQUE (playerId, isFavorite)
);
The following queries show how new rows are validated against the unique constraint:
mysql> INSERT INTO
items(playerId, isFavorite)
VALUES
(4, NULL), (7, NULL), (7, 'FAV'), (5, 'FAV');
Query OK, 4 rows affected (0.02 sec)
Records: 4 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM items;
+----+----------+------------+
| id | playerId | isFavorite |
+----+----------+------------+
| 1 | 4 | NULL |
| 4 | 5 | FAV |
| 2 | 7 | NULL |
| 3 | 7 | FAV |
+----+----------+------------+
4 rows in set (0.00 sec)
mysql> INSERT INTO items(playerId, isFavorite) VALUES (5, 'FAV');
ERROR 1062 (23000): Duplicate entry '5-FAV' for key 'items.items_UQ_fav'
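The trick works because a UNIQUE index never treats two NULLs as duplicates. Here is a small Python/sqlite3 sketch of the same semantics (a TEXT column stands in for the ENUM; illustration only, SQLite and MySQL agree on this NULL behavior):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE items(
  id INTEGER PRIMARY KEY,
  playerId INTEGER NOT NULL,
  isFavorite TEXT NULL,              -- 'FAV' or NULL (stand-in for the ENUM)
  UNIQUE (playerId, isFavorite)
)""")

# Many NULL rows per player are fine: NULLs never collide in a UNIQUE index.
conn.executemany("INSERT INTO items(playerId, isFavorite) VALUES (?, ?)",
                 [(7, None), (7, None), (7, "FAV")])

try:
    conn.execute("INSERT INTO items(playerId, isFavorite) VALUES (7, 'FAV')")
except sqlite3.IntegrityError as e:
    print("rejected second favorite:", e)
```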
I have a table, Models that consists of these (relevant) attributes:
-- -----------------------------------------------------
-- Table `someDB`.`Models`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `someDB`.`Models` (
`model_id` MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
`type_id` SMALLINT UNSIGNED NOT NULL,
-- someOtherAttributes
PRIMARY KEY (`model_id`))
ENGINE = InnoDB;
+---------+---------+
| model_id| type_id |
+---------+---------+
| 1 | 4 |
| 2 | 4 |
| 3 | 5 |
| 4 | 3 |
+---------+---------+
And table Model_Hierarchy that shows the parent & child relationship (again, showing only the relevant attributes):
-- -----------------------------------------------------
-- Table `someDB`.`Model_Hierarchy`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `someDB`.`Model_Hierarchy` (
`parent_id` MEDIUMINT UNSIGNED NOT NULL,
`child_id` MEDIUMINT UNSIGNED NOT NULL,
-- someOtherAttributes,
INDEX `fk_Model_Hierarchy_Models1_idx` (`parent_id` ASC),
INDEX `fk_Model_Hierarchy_Models2_idx` (`child_id` ASC),
PRIMARY KEY (`parent_id`, `child_id`),
CONSTRAINT `fk_Model_Hierarchy_Models1`
FOREIGN KEY (`parent_id`)
REFERENCES `someDB`.`Models` (`model_id`)
ON DELETE CASCADE
ON UPDATE NO ACTION,
CONSTRAINT `fk_Model_Hierarchy_Models2`
FOREIGN KEY (`child_id`)
REFERENCES `someDB`.`Models` (`model_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
+-----------+----------+
| parent_id | child_id |
+-----------+----------+
| 1 | 2 |
| 2 | 4 |
| 3 | 4 |
+-----------+----------+
If there is a Model that is not a parent or child (at some point) of another Model whose type is 5, it is not valid and hence should be deleted.
This means that Model 1, 2 should be deleted because at no point do they have a model as parent or child with type_id = 5.
There are N levels in this hierarchy, but there are no circular relationships (i.e. 1 -> 2; 2 -> 1 will not exist).
Any idea on how to do this?
Comments are dispersed throughout the code.
Schema:
CREATE TABLE `Models`
( -- Note that for now the AUTO_INC is ripped out of this for ease of data insertion
-- otherwise we lose control at this point (this is just a test)
-- `model_id` MEDIUMINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
`model_id` MEDIUMINT UNSIGNED PRIMARY KEY,
`type_id` SMALLINT UNSIGNED NOT NULL
)ENGINE = InnoDB;
CREATE TABLE `Model_Hierarchy`
( -- OP comments state these are more like components
--
-- #Drew imagine b being a product and a and c being two different ways to package it.
-- Hence b is contained in both a and c respectively and separately (ie. customer can buy
-- both a and c), however, any change (outside of the scope of this question) to b is
-- reflected to both a and c. `Model_Hierarchy can be altered, yes (the project is
-- in an early development). Max tree depth is unknown (this is for manufacturing...
-- so a component can consist of a component... that consist of further component etc.
-- no real limit). How many rows? Depends, but I don't expect it to exceed 2^32.
--
--
-- Drew's interpretation of the above: `a` is a parent of `b`, `c` is a parent of `b`
--
`parent_id` MEDIUMINT UNSIGNED NOT NULL,
`child_id` MEDIUMINT UNSIGNED NOT NULL,
INDEX `fk_Model_Hierarchy_Models1_idx` (`parent_id` ASC),
INDEX `fk_Model_Hierarchy_Models2_idx` (`child_id` ASC),
PRIMARY KEY (`parent_id`, `child_id`),
key(`child_id`,`parent_id`), -- NoteA1 pair flipped the other way (see NoteA2 in stored proc)
CONSTRAINT `fk_Model_Hierarchy_Models1`
FOREIGN KEY (`parent_id`)
REFERENCES `Models` (`model_id`)
ON DELETE CASCADE
ON UPDATE NO ACTION,
CONSTRAINT `fk_Model_Hierarchy_Models2`
FOREIGN KEY (`child_id`)
REFERENCES `Models` (`model_id`)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)ENGINE = InnoDB;
CREATE TABLE `GoodIds`
( -- a table to determine what not to delete from models
`id` int auto_increment primary key,
`model_id` MEDIUMINT UNSIGNED,
`has_been_processed` int not null,
dtFinished datetime null,
-- index section (none shown, developer chooses later, as he knows what is going on)
unique index(model_id), -- supports the "insert ignore" concept
-- FK's below:
foreign key `fk_abc_123` (model_id) references Models(model_id)
)ENGINE = InnoDB;
To drop and start over from the top:
-- ------------------------------------------------------------
-- reverse order is happier
drop table `GoodIds`;
drop table `Model_Hierarchy`;
drop table `Models`;
-- ------------------------------------------------------------
Load Test Data:
insert Models(model_id,type_id) values
(1,1),(2,1),(3,1),(4,1),(5,1),(6,1),(7,1),(8,1),(9,5),(10,1),(11,1),(12,1);
-- delete from Models; -- note, truncate does not work on parents of FK's
insert Model_Hierarchy(parent_id,child_id) values
(1,2),(1,3),(1,4),(1,5),
(2,1),(2,4),(2,7),
(3,2),
(4,8),(4,9),
(5,1),
(6,1),(6,2),
(7,1),(7,10),
(8,1),(8,12),
(9,11),
(10,11),
(11,12);
-- Set 2 to test (truncate Model_Hierarchy, then paste these values in place of the ones above):
(1,2),(1,3),(1,4),(1,5),
(2,1),(2,4),(2,7),
(3,2),
(4,8),(4,9),
(5,1),
(6,1),(6,2),
(7,1),(7,10),
(8,1),(8,12),
(9,1),
(10,11),
(11,12);
-- truncate table Model_Hierarchy;
-- select * from Model_Hierarchy;
-- select * from Models where type_id=5;
Stored Procedure:
DROP PROCEDURE if exists loadUpGoodIds;
DELIMITER $$
CREATE PROCEDURE loadUpGoodIds()
BEGIN
DECLARE bDone BOOL DEFAULT FALSE;
DECLARE iSillyCounter int DEFAULT 0;
TRUNCATE TABLE GoodIds;
insert GoodIds(model_id,has_been_processed) select model_id,0 from Models where type_id=5;
WHILE bDone = FALSE DO
select min(model_id) into @the_Id_To_Process from GoodIds where has_been_processed=0;
IF @the_Id_To_Process is null THEN
SET bDone=TRUE;
ELSE
-- First, let's say this is the parent id.
-- Find the child id's that this is a parent of
-- and they qualify as A Good Id to save into our Good table
insert ignore GoodIds(model_id,has_been_processed,dtFinished)
select child_id,0,null
from Model_Hierarchy
where parent_id=@the_Id_To_Process;
-- Next, let's say this is the child id.
-- Find the parent id's that this is a child of
-- and they qualify as A Good Id to save into our Good table
insert ignore GoodIds(model_id,has_been_processed,dtFinished)
select child_id,0,null
from Model_Hierarchy
where child_id=@the_Id_To_Process;
-- NoteA2: see NoteA1 in schema
-- you can feel the need for the flipped pair composite key in the above
UPDATE GoodIds set has_been_processed=1,dtFinished=now() where model_id=@the_Id_To_Process;
END IF;
-- safety bailout during development:
SET iSillyCounter = iSillyCounter + 1;
IF iSillyCounter>10000 THEN
SET bDone=TRUE;
END IF;
END WHILE;
END$$
DELIMITER ;
Test:
call loadUpGoodIds();
-- select count(*) from GoodIds; -- 9 / 11 / 12
select * from GoodIds limit 10;
+----+----------+--------------------+---------------------+
| id | model_id | has_been_processed | dtFinished |
+----+----------+--------------------+---------------------+
| 1 | 9 | 1 | 2016-06-28 20:33:16 |
| 2 | 11 | 1 | 2016-06-28 20:33:16 |
| 4 | 12 | 1 | 2016-06-28 20:33:16 |
+----+----------+--------------------+---------------------+
Mop-up calls, which can be folded into the stored proc:
-- The below is what to run
-- delete from Models where model_id not in (select null); -- this is a safe call (will never do anything)
-- the above is just a null test
delete from Models where model_id not in (select model_id from GoodIds);
-- Error 1451: Cannot delete or update a parent row: a FK constraint is unhappy
-- hey the cascades did not work, can figure that out later
-- Let's go bottom up for now. Meaning, to honor FK constraints, delete bottom up.
delete from Model_Hierarchy where parent_id not in (select model_id from GoodIds);
-- 18 rows deleted
delete from Model_Hierarchy where child_id not in (select model_id from GoodIds);
-- 0 rows deleted
delete from Models where model_id not in (select model_id from GoodIds);
-- 9 rows deleted / 3 remain
select * from Models;
+----------+---------+
| model_id | type_id |
+----------+---------+
| 9 | 5 |
| 11 | 1 |
| 12 | 1 |
+----------+---------+
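As written, the second INSERT IGNORE selects child_id again, so the loop effectively follows only parent-to-child links outward from the type-5 seeds. That traversal can be sketched in a few lines of Python over the first test data set (variable names are mine, not from the procedure):

```python
# Hierarchy edges (parent_id, child_id) from the first test data set.
edges = [(1,2),(1,3),(1,4),(1,5),(2,1),(2,4),(2,7),(3,2),(4,8),(4,9),
         (5,1),(6,1),(6,2),(7,1),(7,10),(8,1),(8,12),(9,11),(10,11),(11,12)]
type5_models = {9}  # models with type_id = 5 seed the GoodIds table

children = {}
for parent, child in edges:
    children.setdefault(parent, set()).add(child)

good_ids = set(type5_models)          # TRUNCATE + seed insert
frontier = set(type5_models)
while frontier:                       # the WHILE loop over unprocessed rows
    current = frontier.pop()
    for child in children.get(current, ()):  # the "insert ignore ... where parent_id=" step
        if child not in good_ids:
            good_ids.add(child)
            frontier.add(child)

print(sorted(good_ids))  # [9, 11, 12], matching the GoodIds rows shown above
```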
In a MySQL database I have this table:
create table student(user_id int(20) unsigned NOT NULL AUTO_INCREMENT,
name varchar(20),
email varchar(20),
primary key(user_id,email));
I want the user_id and email fields each to hold unique values.
I also added:
alter table student add unique unique_index (user_id,email);
but it still accepts duplicate entries:
mysql> insert into student (name, email) values ("a", "aa");
Query OK, 1 row affected (0.06 sec)
mysql> insert into student (name, email) values ("a", "aa");
Query OK, 1 row affected (0.06 sec)
select * from student;
+---------+------+-------+
| user_id | name | email |
+---------+------+-------+
| 1 | a | aa |
| 2 | a | aa |
+---------+------+-------+
2 rows in set (0.00 sec)
How can I make those fields (user_id and email) unique?
You can do it like this. Note that every unique index name must be different.
CREATE TABLE student(user_id INT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
NAME VARCHAR(20),
email VARCHAR(20),
PRIMARY KEY(user_id,email));
Unique index creation:
ALTER TABLE student ADD UNIQUE unique_index1 (user_id);
ALTER TABLE student ADD UNIQUE unique_index2 (email);
Inserting data:
INSERT INTO student (`name`, email) VALUES ("a", "aa");
Running the same INSERT again gives an error:
INSERT INTO student (`name`, email) VALUES ("a", "aa");
Thank you.
If my understanding of your requirement is correct, you have to add two separate unique indexes:
alter table student add unique unique_index (user_id);
alter table student add unique unique_index_email (email);
Your original statement creates an index where only the combination of user_id and email is unique!
Let's look at this example database:
As we can see, person depends on the city (person.city_id is a foreign key). I don't delete rows, I just set them inactive (active=0). After setting city inactive, how can I automatically set all persons who are dependent on this city inactive? Is there a better way than writing triggers?
EDIT: I am interested only in setting person's rows inactive, not setting them active.
Here's a solution that uses cascading foreign keys to do what you describe:
mysql> create table city (
id int not null auto_increment,
name varchar(45),
active tinyint,
primary key (id),
unique key (id, active));
mysql> create table person (
id int not null auto_increment,
city_id int,
active tinyint,
primary key (id),
foreign key (city_id, active) references city (id, active) on update cascade);
mysql> insert into city (name, active) values ('New York', 1);
mysql> insert into person (city_id, active) values (1, 1);
mysql> select * from person;
+----+---------+--------+
| id | city_id | active |
+----+---------+--------+
| 1 | 1 | 1 |
+----+---------+--------+
mysql> update city set active = 0 where id = 1;
mysql> select * from person;
+----+---------+--------+
| id | city_id | active |
+----+---------+--------+
| 1 | 1 | 0 |
+----+---------+--------+
Tested on MySQL 5.5.31.
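For a quick self-contained check of the cascading trick, here is the same schema replayed in Python with sqlite3, which also supports composite foreign keys with ON UPDATE CASCADE (an illustration of the idea, not the MySQL session above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # required for SQLite to enforce FKs

conn.executescript("""
CREATE TABLE city(
  id INTEGER PRIMARY KEY,
  name TEXT,
  active INTEGER,
  UNIQUE (id, active)                 -- referenced composite key
);
CREATE TABLE person(
  id INTEGER PRIMARY KEY,
  city_id INTEGER,
  active INTEGER,
  FOREIGN KEY (city_id, active) REFERENCES city(id, active) ON UPDATE CASCADE
);
INSERT INTO city(name, active) VALUES ('New York', 1);
INSERT INTO person(city_id, active) VALUES (1, 1);
""")

# Deactivating the city cascades into person.active via the composite FK.
conn.execute("UPDATE city SET active = 0 WHERE id = 1")
print(conn.execute("SELECT id, city_id, active FROM person").fetchall())  # [(1, 1, 0)]
```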
Maybe you should reconsider how you define a person to be active. Instead of storing active twice, you could keep it only in the city table and have your SELECT statements return persons WHERE city.active = 1.
But if you must, you could do something like:
UPDATE city C
LEFT JOIN person P ON C.id = P.city_id
SET C.active = 0, P.active = 0
WHERE C.id = @id;
I have multiple databases with the same structure in which data is sometimes copied across. In order to maintain data integrity I am using two columns as the primary key. One is a database id, which links to a table with info about each database. The other is a table key. It is not unique because it may have multiple rows with this value being the same, but different values in the database_id column.
I am planning on making the two columns into a joint primary key. However I also want to set the table key to auto increment - but based on the database_id column.
EG, With this data:
table_id  database_id  other_columns
1         1
2         1
3         1
1         2
2         2
If I am adding data with a database_id of 1, then I want table_id to be automatically set to 4. If the database_id is entered as 2, then I want table_id to be automatically set to 3, etc.
What is the best way of achieving this in MySQL?
If you are using MyISAM, see this example in the MySQL manual:
http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
For your example:
mysql> CREATE TABLE mytable (
-> table_id MEDIUMINT NOT NULL AUTO_INCREMENT,
-> database_id MEDIUMINT NOT NULL,
-> other_column CHAR(30) NOT NULL,
-> PRIMARY KEY (database_id,table_id)
-> ) ENGINE=MyISAM;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO mytable (database_id, other_column) VALUES
-> (1,'Foo'),(1,'Bar'),(2,'Baz'),(1,'Bam'),(2,'Zam'),(3,'Zoo');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM mytable ORDER BY database_id,table_id;
+----------+-------------+--------------+
| table_id | database_id | other_column |
+----------+-------------+--------------+
| 1 | 1 | Foo |
| 2 | 1 | Bar |
| 3 | 1 | Bam |
| 1 | 2 | Baz |
| 2 | 2 | Zam |
| 1 | 3 | Zoo |
+----------+-------------+--------------+
6 rows in set (0.00 sec)
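Under the hood the generated value is just MAX(table_id) + 1 within each database_id group. Here is a tiny Python sketch of that allocation rule, replaying the INSERT above (illustrative only, not how MyISAM implements it internally):

```python
from collections import defaultdict

next_id = defaultdict(int)  # database_id -> last table_id handed out

def allocate(database_id):
    """Return the next table_id for this database_id (MAX + 1 per group)."""
    next_id[database_id] += 1
    return next_id[database_id]

rows = [(allocate(db), db, other)
        for db, other in [(1,'Foo'), (1,'Bar'), (2,'Baz'), (1,'Bam'), (2,'Zam'), (3,'Zoo')]]
print(rows)
# [(1, 1, 'Foo'), (2, 1, 'Bar'), (1, 2, 'Baz'), (3, 1, 'Bam'), (2, 2, 'Zam'), (1, 3, 'Zoo')]
```

The ids match the SELECT output above: each database_id gets its own independent counter.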
Here's one approach for InnoDB, which will also be very performant due to the clustered composite index (only available with InnoDB):
http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
drop table if exists db;
create table db
(
db_id smallint unsigned not null auto_increment primary key,
next_table_id int unsigned not null default 0
)engine=innodb;
drop table if exists tables;
create table tables
(
db_id smallint unsigned not null,
table_id int unsigned not null default 0,
primary key (db_id, table_id) -- composite clustered index
)engine=innodb;
delimiter #
create trigger tables_before_ins_trig before insert on tables
for each row
begin
declare v_id int unsigned default 0;
select next_table_id + 1 into v_id from db where db_id = new.db_id;
set new.table_id = v_id;
update db set next_table_id = v_id where db_id = new.db_id;
end#
delimiter ;
insert into db (next_table_id) values (null),(null),(null);
insert into tables (db_id) values (1),(1),(2),(1),(3),(2);
select * from db;
select * from tables;
You can make the two-column primary key unique and the auto-increment key primary.
The solution provided by DTing is excellent and works. But when I tried the same in AWS Aurora, it didn't work, complaining with the error below:
Error Code: 1075. Incorrect table definition; there can be only one auto column and it must be defined as a key
Hence I am suggesting a JSON-based solution here.
CREATE TABLE DB_TABLE_XREF (
db VARCHAR(36) NOT NULL,
tables JSON,
PRIMARY KEY (db)
)
Keep the first primary key as a normal column, put the second "primary key" inside the JSON, and maintain its auto-increment sequence as a seq value in the JSON.
INSERT INTO `DB_TABLE_XREF`
(`db`, `tables`)
VALUES
('account_db', '{"user_info": 1, "seq" : 1}')
ON DUPLICATE KEY UPDATE `tables` =
JSON_SET(`tables`,
'$."user_info"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1),
'$."seq"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1)
);
The output looks like this:
account_db {"user_info" : 1, "user_details" : 2, "seq" : 2}
product_db {"product1" : 1, "product2" : 2, "product3" : 3, "seq" : 3}
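The upsert boils down to: if the table name is already a key in the JSON, keep its number; otherwise allocate seq + 1 and bump seq. Here is a Python sketch of that logic, with a plain dict standing in for the JSON column (names assumed from the example above):

```python
def assign_table_id(tables, table_name):
    """Mimic the JSON_SET upsert: reuse an existing id, else allocate seq + 1."""
    if table_name not in tables:
        tables["seq"] += 1
        tables[table_name] = tables["seq"]
    return tables[table_name]

account_db = {"user_info": 1, "seq": 1}                  # state after the first INSERT
assert assign_table_id(account_db, "user_details") == 2  # new name gets seq + 1
assert assign_table_id(account_db, "user_info") == 1     # existing name keeps its id
print(account_db)  # {'user_info': 1, 'seq': 2, 'user_details': 2}
```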
If your set of secondary keys is huge and you are afraid of using JSON, then I would suggest a stored procedure that checks MAX(secondary_column) while holding a lock, like below:
SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
IF t_id = 0 THEN
SELECT GET_LOCK(db_name, 10) INTO acq_lock;
-- CALL debug_msg(TRUE, "Acquiring lock");
IF acq_lock = 1 THEN
SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
-- double check under the lock
IF t_id = 0 THEN
SELECT IFNULL((SELECT MAX(table_id) FROM (SELECT table_id FROM DB_TABLE_XREF WHERE `database` = db_name) AS something), 0) + 1 INTO t_id;
INSERT INTO DB_TABLE_XREF VALUES (db_name, table_name, t_id);
END IF;
COMMIT;
SELECT RELEASE_LOCK(db_name) INTO acq_lock;
ELSE
-- CALL debug_msg(TRUE, "Failed to acquire lock");
SET t_id = 0;
END IF;
END IF;