This question already has answers here:
mysql two column primary key with auto-increment
(4 answers)
Closed 9 years ago.
I'm trying to create a composite primary key with one of the columns auto-incrementing, but when I insert a new row the counter just continues the overall sequence.
Here's an example of what happens:
Item_1 | Item_2
1 | 1
1 | 2
2 | 3
2 | 4
2 | 5
Here's an example of what I want:
Item_1 | Item_2
1 | 1
1 | 2
2 | 1
2 | 2
2 | 3
I created the table this way:
CREATE TABLE IF NOT EXISTS `usuarios` (
  `cod_user` int(11) NOT NULL AUTO_INCREMENT,
  `cod_user_emp` int(11) NOT NULL,
  `user` varchar(255) NOT NULL, -- assumed definition; the original snippet references `user` without defining it
  PRIMARY KEY (`cod_user`,`cod_user_emp`),
  UNIQUE KEY `user` (`user`),
  KEY `cod_user` (`cod_user`)
);
Edit
I resolved the problem with server-side PHP validation.
// fetch the highest cod_user for this cod_user_emp and add 1
$result = $db->query("SELECT * FROM usuarios WHERE cod_user_emp=\"$emp\" ORDER BY cod_user DESC LIMIT 1");
while ($row = $result->fetch_array()) {
    $cod2 = $row['cod_user'] + 1;
}
Remove the AUTO_INCREMENT attribute from that column:
CREATE TABLE IF NOT EXISTS `usuarios`
(
`cod_user` int(11) NOT NULL,
`cod_user_emp` int(11) NOT NULL,
PRIMARY KEY (`cod_user`,`cod_user_emp`) -- <<== this is enough
);
Then you can create a stored procedure that increments Item_2 for every Item_1.
DELIMITER $$
CREATE PROCEDURE InsertRecord(IN ItemA INT)
BEGIN
    SET @max_id = (
        SELECT COALESCE(MAX(Item_2), 0) + 1
        FROM TableName
        WHERE Item_1 = ItemA
    );
    INSERT INTO TableName (Item_1, Item_2)
    VALUES (ItemA, @max_id);
END $$
DELIMITER ;
and call it like this:
CALL InsertRecord(2);
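Applied to the usuarios table from the question, the same pattern would look roughly like this (a sketch; the procedure name and the column list are my own, and any other columns of the real table would need to be added to the INSERT):
DELIMITER $$
CREATE PROCEDURE InsertUsuario(IN emp INT)
BEGIN
    -- next cod_user within this cod_user_emp group
    SET @next_id = (
        SELECT COALESCE(MAX(cod_user), 0) + 1
        FROM usuarios
        WHERE cod_user_emp = emp
    );
    INSERT INTO usuarios (cod_user, cod_user_emp)
    VALUES (@next_id, emp);
END $$
DELIMITER ;

CALL InsertUsuario(2);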
Related
I am in this situation:
Background
I have 2 database schemas called "prod" and "stg".
"prod" contains 2 tables called "parent" and "child"
"stg" only has the "parent" table
The "parent" table definition is the same across the "prod" and "stg" schemas.
When records are deleted, the "parent" table uses soft deletes (logical deletion, i.e. DELETE_FLG is set to 1), whereas the "child" table uses hard deletes (the record is physically removed).
Goal
I am trying to achieve the following goal:
when, and only when, a record is deleted in both "prod"."parent" and "stg"."parent" (whether physically or logically, or it does not exist on one side), automatically cascade a physical delete to the records in the "prod"."child" table whose SP_ID matches the value in "parent".
For example, assuming I have
"prod"."parent"
+-------+---------+------------+
| SP_ID | SP_NAME | DELETE_FLG |
+-------+---------+------------+
| 1     | 1       | 1          |
+-------+---------+------------+
"stg"."parent"
+-------+---------+------------+
| SP_ID | SP_NAME | DELETE_FLG |
+-------+---------+------------+
| 1     | 1       | 0          |
+-------+---------+------------+
"prod"."child"
+-------+---------+
| SP_ID | JOB_KEY |
+-------+---------+
| 1     | key     |
+-------+---------+
If I then execute UPDATE "stg"."parent" SET DELETE_FLG = 1 WHERE SP_ID = 1, which logically deletes the last "existing" record with SP_ID 1 in a "parent" table, the record in "prod"."child" should also be automatically physically deleted by MySQL.
Question
I have been thinking about making SP_ID in the child table a foreign key referencing the one in the parent table (https://dev.mysql.com/doc/refman/8.0/en/create-table-foreign-keys.html)
however,
a) I don't know whether it is possible to reference multiple tables in different schemas, and
b) It seems MySQL only supports cascading the same operation, i.e. a delete on the parent deletes the child, or an update on the parent updates the child. But in my case, I want an update on the parent to delete the child.
Could somebody help me out here please?
Is this possible to achieve in MySQL, or do I have to do this in the application layer?
Table definition
CREATE TABLE `prod`.`parent` (
  `SP_ID` varchar(20) NOT NULL COMMENT '',
  `SP_NAME` varchar(100) NOT NULL COMMENT '',
  `DELETE_FLG` tinyint(1) NOT NULL DEFAULT '0' COMMENT '',
  PRIMARY KEY (`SP_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='';

CREATE TABLE `prod`.`child` (
  `SP_ID` varchar(20) NOT NULL COMMENT '',
  `JOB_KEY` varchar(11) NOT NULL,
  PRIMARY KEY (`SP_ID`,`JOB_KEY`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='';

CREATE TABLE `stg`.`parent` (
  `SP_ID` varchar(20) NOT NULL COMMENT '',
  `SP_NAME` varchar(100) NOT NULL COMMENT '',
  `DELETE_FLG` tinyint(1) NOT NULL DEFAULT '0' COMMENT '',
  PRIMARY KEY (`SP_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='';
With the hint of using triggers, this is my solution, which works:
Add two triggers (one AFTER UPDATE, one AFTER DELETE) to both the prod and stg parent tables; the stg-side pair is shown below.
# after update trigger
DELIMITER $$
CREATE DEFINER=`root`@`localhost` TRIGGER `stg`.`parent_AFTER_UPDATE` AFTER UPDATE ON `parent` FOR EACH ROW
BEGIN
    IF (
        SELECT COUNT(*)
        FROM (
            SELECT *
            FROM `prod`.`parent`
            WHERE `prod`.`parent`.SP_ID = old.SP_ID AND `prod`.`parent`.DELETE_FLG = 0
            UNION ALL
            SELECT *
            FROM `stg`.`parent`
            WHERE `stg`.`parent`.SP_ID = old.SP_ID AND `stg`.`parent`.DELETE_FLG = 0
        ) AS a
    ) = 0 THEN
        DELETE FROM `prod`.`child` WHERE `prod`.`child`.SP_ID = old.SP_ID;
    END IF;
END $$

# after delete trigger
CREATE DEFINER=`root`@`localhost` TRIGGER `stg`.`parent_AFTER_DELETE` AFTER DELETE ON `parent` FOR EACH ROW
BEGIN
    IF (
        SELECT COUNT(*)
        FROM (
            SELECT *
            FROM `prod`.`parent`
            WHERE `prod`.`parent`.SP_ID = old.SP_ID AND `prod`.`parent`.DELETE_FLG = 0
            UNION ALL
            SELECT *
            FROM `stg`.`parent`
            WHERE `stg`.`parent`.SP_ID = old.SP_ID AND `stg`.`parent`.DELETE_FLG = 0
        ) AS a
    ) = 0 THEN
        DELETE FROM `prod`.`child` WHERE `prod`.`child`.SP_ID = old.SP_ID;
    END IF;
END $$
DELIMITER ;
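The triggers above cover only the stg side. A sketch of the prod-side AFTER UPDATE trigger (not from the original answer); since SP_ID is the primary key of prod.parent, the NEW row can be checked directly instead of re-querying the table the trigger fires on, and the prod-side AFTER DELETE trigger is analogous:
# after update trigger on the prod side
DELIMITER $$
CREATE DEFINER=`root`@`localhost` TRIGGER `prod`.`parent_AFTER_UPDATE` AFTER UPDATE ON `parent` FOR EACH ROW
BEGIN
    IF new.DELETE_FLG = 1 AND (
        SELECT COUNT(*)
        FROM `stg`.`parent`
        WHERE `stg`.`parent`.SP_ID = old.SP_ID AND `stg`.`parent`.DELETE_FLG = 0
    ) = 0 THEN
        DELETE FROM `prod`.`child` WHERE `prod`.`child`.SP_ID = old.SP_ID;
    END IF;
END $$
DELIMITER ;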
Is it possible to use a WHILE statement inside a SELECT clause in MySQL?
Here is an example of what I want to do:
CREATE TABLE `item` (
`id` int,
`parentId` int,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`),
KEY `FK_parentId` (`parentId`),
CONSTRAINT `FK_parentId` FOREIGN KEY (`parentId`) REFERENCES `item` (`id`)
);
I would like to select the root of each item, i.e. its highest ancestor (the item that has no parentId). In my mind, I would do something like this:
select
`id` as 'ID',
while `parentId` is not null do `id` = `parentId` end while as 'Root ID'
from
`item`
Of course this can't work. What is the best way to achieve something like that?
EDIT
Here is some sample data:
id | parentId
1 | NULL
2 | 1
3 | 2
4 | 2
5 | 3
6 | NULL
7 | 6
8 | 7
9 | 7
And the expected result:
ID | RootId
1 | NULL
2 | 1
3 | 1
4 | 1
5 | 1
6 | NULL
7 | 6
8 | 6
9 | 6
Thank you.
Just use CASE:
select
    `id` as 'ID',
    CASE WHEN `parentId` IS NOT NULL THEN `parentId` END as 'Root ID'
from
    `item`
Here is the procedure:
BEGIN
    -- declare variables
    DECLARE cursor_ID INT;
    DECLARE cursor_PARENTID INT;
    DECLARE done BOOLEAN DEFAULT FALSE;
    -- declare cursor
    DECLARE cursor_item CURSOR FOR SELECT id, parentId FROM item;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
    -- create a temporary table
    CREATE TEMPORARY TABLE IF NOT EXISTS temp_table AS (SELECT id, parentId FROM item);
    TRUNCATE TABLE temp_table;
    OPEN cursor_item;
    item_loop: LOOP
        -- fetch row through cursor
        FETCH cursor_item INTO cursor_ID, cursor_PARENTID;
        IF done THEN
            -- end loop if cursor is empty
            LEAVE item_loop;
        END IF;
        -- walk up the chain for this id and store (id, root) in the temp table
        INSERT INTO temp_table
        SELECT MAX(t.id) id, MIN(@pv := t.parentId) parentId
        FROM (SELECT * FROM item ORDER BY id DESC) t
        JOIN (SELECT @pv := cursor_ID) tmp
        WHERE t.id = @pv;
    END LOOP;
    -- close cursor
    CLOSE cursor_item;
    -- get the results
    SELECT id id, parentId RootId FROM temp_table ORDER BY id ASC;
END
I created a temporary table and kept the results in it while running the cursor. I couldn't think of a solution with just one query, so I had to go with a cursor.
I took help from the following links:
How to do the Recursive SELECT query in MySQL?
How to create a MySQL hierarchical recursive query
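For completeness (not part of the original answers): on MySQL 8.0 and later, a recursive CTE solves this in a single query. A sketch against the item table from the question:
WITH RECURSIVE ancestors AS (
    -- anchor: every item, starting from its direct parent
    SELECT id, parentId AS rootId
    FROM item
    UNION ALL
    -- step: replace the current ancestor by its own parent, while it has one
    SELECT a.id, p.parentId
    FROM ancestors a
    JOIN item p ON p.id = a.rootId
    WHERE p.parentId IS NOT NULL
)
SELECT id AS ID, rootId AS RootId
FROM ancestors
WHERE rootId IS NULL                                          -- the item itself is a root
   OR rootId IN (SELECT id FROM item WHERE parentId IS NULL)  -- reached a topmost ancestor
ORDER BY id;
This returns NULL for root items and the topmost ancestor for everything else, matching the expected result above.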
I have multiple databases with the same structure, and data is sometimes copied across them. To maintain data integrity I am using two columns as the primary key. One is a database id, which links to a table with info about each database. The other is a table key. It is not unique on its own, because several rows may share the same value with different values in the database_id column.
I am planning on making the two columns a composite primary key. However, I also want the table key to auto-increment, per database_id.
E.g., with this data:
table_id  database_id  other_columns
1         1
2         1
3         1
1         2
2         2
If I am adding data with a database_id of 1, then I want table_id to be automatically set to 4. If the database_id is 2, then I want table_id to be automatically set to 3, and so on.
What is the best way of achieving this in MySQL?
If you are using MyISAM:
http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
For your example:
mysql> CREATE TABLE mytable (
-> table_id MEDIUMINT NOT NULL AUTO_INCREMENT,
-> database_id MEDIUMINT NOT NULL,
-> other_column CHAR(30) NOT NULL,
-> PRIMARY KEY (database_id,table_id)
-> ) ENGINE=MyISAM;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO mytable (database_id, other_column) VALUES
-> (1,'Foo'),(1,'Bar'),(2,'Baz'),(1,'Bam'),(2,'Zam'),(3,'Zoo');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM mytable ORDER BY database_id,table_id;
+----------+-------------+--------------+
| table_id | database_id | other_column |
+----------+-------------+--------------+
| 1 | 1 | Foo |
| 2 | 1 | Bar |
| 3 | 1 | Bam |
| 1 | 2 | Baz |
| 2 | 2 | Zam |
| 1 | 3 | Zoo |
+----------+-------------+--------------+
6 rows in set (0.00 sec)
Here's one approach for InnoDB, which will also perform very well thanks to the clustered composite index (only available with InnoDB)...
http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
drop table if exists db;
create table db
(
db_id smallint unsigned not null auto_increment primary key,
next_table_id int unsigned not null default 0
)engine=innodb;
drop table if exists tables;
create table tables
(
db_id smallint unsigned not null,
table_id int unsigned not null default 0,
primary key (db_id, table_id) -- composite clustered index
)engine=innodb;
delimiter #
create trigger tables_before_ins_trig before insert on tables
for each row
begin
declare v_id int unsigned default 0;
select next_table_id + 1 into v_id from db where db_id = new.db_id;
set new.table_id = v_id;
update db set next_table_id = v_id where db_id = new.db_id;
end#
delimiter ;
insert into db (next_table_id) values (0),(0),(0); -- seed one counter row per database
insert into tables (db_id) values (1),(1),(2),(1),(3),(2);
select * from db;
select * from tables;
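One caveat worth noting (not part of the original answer): two simultaneous inserts for the same db_id can read the same next_table_id before either transaction commits, producing a duplicate key error. Locking the counter row inside the trigger closes that window; a sketch of the same trigger with FOR UPDATE added:
delimiter #
create trigger tables_before_ins_trig before insert on tables
for each row
begin
    declare v_id int unsigned default 0;
    -- lock this db's counter row so concurrent inserts serialize
    select next_table_id + 1 into v_id from db where db_id = new.db_id for update;
    set new.table_id = v_id;
    update db set next_table_id = v_id where db_id = new.db_id;
end#
delimiter ;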
You can make the two-column key a unique key and keep the auto-increment column as the primary key.
The solution provided by DTing is excellent and works. But when I tried the same thing in AWS Aurora, it didn't work and complained with the error below:
Error Code: 1075. Incorrect table definition; there can be only one auto column and it must be defined as a key
Hence I'm suggesting a JSON-based solution here.
CREATE TABLE DB_TABLE_XREF (
db VARCHAR(36) NOT NULL,
tables JSON,
PRIMARY KEY (db)
);
Keep the first key as the table's primary key, store the second key inside the JSON document, and maintain its auto-increment sequence as the "seq" entry in the same document.
INSERT INTO `DB_TABLE_XREF`
(`db`, `tables`)
VALUES
('account_db', '{"user_info": 1, "seq" : 1}')
ON DUPLICATE KEY UPDATE `tables` =
JSON_SET(`tables`,
'$."user_info"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1),
'$."seq"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1)
);
And the output looks like this:
account_db {"user_info" : 1, "user_details" : 2, "seq" : 2}
product_db {"product1" : 1, "product2" : 2, "product3" : 3, "seq" : 3}
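To read back the number assigned to a given table name, the JSON path operator can be used directly (same column names as above):
SELECT `tables` -> '$."user_info"' AS table_id
FROM DB_TABLE_XREF
WHERE db = 'account_db';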
If your secondary keys are huge and you are wary of using JSON, then I would suggest a stored procedure that checks MAX(secondary_column) while holding a lock, like below.
SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
IF t_id = 0 THEN
    SELECT GET_LOCK(db_name, 10) INTO acq_lock;
    -- CALL debug_msg(TRUE, "Acquiring lock");
    IF acq_lock = 1 THEN
        SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
        -- double-check after acquiring the lock
        IF t_id = 0 THEN
            SELECT IFNULL((SELECT MAX(table_id) FROM (SELECT table_id FROM DB_TABLE_XREF WHERE `database` = db_name) AS something), 0) + 1 INTO t_id;
            INSERT INTO DB_TABLE_XREF VALUES (db_name, table_name, t_id);
        END IF;
        -- release the advisory lock once done
        DO RELEASE_LOCK(db_name);
    ELSE
        -- CALL debug_msg(TRUE, "Failed to acquire lock");
        SET t_id = 0; -- placeholder; lock was not acquired
    END IF;
    COMMIT;
END IF;
I have a MySQL table with two fields as the primary key (ID and Account); ID has AUTO_INCREMENT.
This results in the following MySQL table:
ID | Account
------------------
1 | 1
2 | 1
3 | 2
4 | 3
However, I expected the following result (restart AUTO_INCREMENT for each Account):
ID | Account
------------------
1 | 1
2 | 1
1 | 2
1 | 3
What is wrong in my configuration? How can I fix this?
Thanks!
The functionality you're describing is possible only with the MyISAM engine. You need to specify the CREATE TABLE statement like this:
CREATE TABLE your_table (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
account_id INT UNSIGNED NOT NULL,
PRIMARY KEY(account_id, id)
) ENGINE = MyISAM;
If you use the InnoDB engine, you can use a trigger like this:
DELIMITER #
CREATE TRIGGER `your_table_before_ins_trig` BEFORE INSERT ON `your_table`
FOR EACH ROW
BEGIN
    DECLARE next_id INT UNSIGNED DEFAULT 1;
    -- get the next ID for this Account number
    SELECT MAX(ID) + 1 INTO next_id FROM your_table WHERE Account = new.Account;
    -- if there is no row for this Account yet, set the ID to 1 by default
    IF next_id IS NULL THEN SET next_id = 1; END IF;
    SET new.ID = next_id;
END#
DELIMITER ;
Note! The delimiter used in the SQL statement above is #.
This solution works for a table like yours if you create it without any auto_increment functionality like this:
CREATE TABLE IF NOT EXISTS `your_table` (
`ID` int(11) NOT NULL,
`Account` int(11) NOT NULL,
PRIMARY KEY (`ID`,`Account`)
);
Now you can insert your values like this:
INSERT INTO your_table (`Account`) VALUES (1);
INSERT INTO your_table (`Account`, `ID`) VALUES (1, 5);
INSERT INTO your_table (`Account`) VALUES (2);
INSERT INTO your_table (`Account`, `ID`) VALUES (3, 10205);
It will result in this:
ID | Account
------------------
1 | 1
2 | 1
1 | 2
1 | 3
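Another option, not from the original answer, is to stay on InnoDB and compute the per-account ID in the INSERT itself. It avoids both MyISAM and triggers, but it is not safe under concurrent inserts without extra locking:
-- compute the next ID for Account 3 at insert time
INSERT INTO your_table (`ID`, `Account`)
SELECT COALESCE(MAX(`ID`), 0) + 1, 3
FROM your_table
WHERE `Account` = 3;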