Two primary keys & auto increment - mysql

I have a MySQL table whose primary key consists of two fields (ID & Account); ID has AUTO_INCREMENT.
This results in the following MySQL table:
ID | Account
------------------
1 | 1
2 | 1
3 | 2
4 | 3
However, I expected the following result (restart AUTO_INCREMENT for each Account):
ID | Account
------------------
1 | 1
2 | 1
1 | 2
1 | 3
What is wrong in my configuration? How can I fix this?
Thanks!

The functionality you're describing is only available with the MyISAM engine. You need to write the CREATE TABLE statement like this:
CREATE TABLE your_table (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  account_id INT UNSIGNED NOT NULL,
  PRIMARY KEY(account_id, id)
) ENGINE = MyISAM;
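As a quick illustration (a sketch assuming the table above, with made-up account numbers), MyISAM restarts the counter for each account_id prefix:
INSERT INTO your_table (account_id) VALUES (1), (1), (2), (3);

SELECT id, account_id FROM your_table ORDER BY account_id, id;
-- expected: (1,1), (2,1), (1,2), (1,3)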

If you use the InnoDB engine, you can use a trigger like this:
CREATE TRIGGER `your_table_before_ins_trig` BEFORE INSERT ON `your_table`
FOR EACH ROW
BEGIN
  DECLARE next_id INT UNSIGNED DEFAULT 1;
  -- get the next ID for your Account number
  SELECT MAX(ID) + 1 INTO next_id FROM your_table WHERE Account = NEW.Account;
  -- if there is no row for this Account number yet, default the ID to 1
  IF next_id IS NULL THEN SET next_id = 1; END IF;
  SET NEW.ID = next_id;
END#
Note! The statement delimiter in the SQL above is # (not the usual ;).
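In case it helps, a minimal sketch of how that delimiter switch would look in the mysql client (the CREATE TRIGGER statement itself is the one shown above):
DELIMITER #
-- ... paste the CREATE TRIGGER ... END# statement from above here ...
DELIMITER ;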
This solution works for a table like yours if you create it without any auto_increment functionality like this:
CREATE TABLE IF NOT EXISTS `your_table` (
  `ID` int(11) NOT NULL,
  `Account` int(11) NOT NULL,
  PRIMARY KEY (`ID`,`Account`)
);
Now you can insert your values like this:
INSERT INTO your_table (`Account`) VALUES (1);
INSERT INTO your_table (`Account`, `ID`) VALUES (1, 5);
INSERT INTO your_table (`Account`) VALUES (2);
INSERT INTO your_table (`Account`, `ID`) VALUES (3, 10205);
It will result in this:
ID | Account
------------------
1 | 1
2 | 1
1 | 2
1 | 3

Related

MySQL - While in SELECT clause

Is it possible to call a WHILE statement inside a SELECT clause in MySQL?
Here is an example of what I want to do:
CREATE TABLE `item` (
  `id` int,
  `parentId` int,
  PRIMARY KEY (`id`),
  UNIQUE KEY `id` (`id`),
  KEY `FK_parentId` (`parentId`),
  CONSTRAINT `FK_parentId` FOREIGN KEY (`parentId`) REFERENCES `item` (`id`)
);
I would like to select the root of each item, i.e. the highest ancestor (the item that has no parentId). In my mind, I would do something like this:
select
`id` as 'ID',
while `parentId` is not null do `id` = `parentId` end while as 'Root ID'
from
`item`
Of course this can't work. What is the best way to achieve something like that?
EDIT
Here is some sample data:
id | parentId
1 | NULL
2 | 1
3 | 2
4 | 2
5 | 3
6 | NULL
7 | 6
8 | 7
9 | 7
And the expected result:
ID | RootId
1 | NULL
2 | 1
3 | 1
4 | 1
5 | 1
6 | NULL
7 | 6
8 | 6
9 | 6
Thank you.
Just use CASE:
select
  `id` as 'ID',
  case when `parentId` is not null then `parentId` end as 'Root ID'
from
  `item`
(Note that this only returns the direct parent, not the topmost ancestor.)
Here is the procedure:
BEGIN
    -- declare variables
    DECLARE cursor_ID INT;
    DECLARE cursor_PARENTID INT;
    DECLARE done BOOLEAN DEFAULT FALSE;
    -- declare cursor
    DECLARE cursor_item CURSOR FOR SELECT id, parentId FROM item;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    -- create a temporary table to collect the results
    CREATE TEMPORARY TABLE IF NOT EXISTS temp_table AS (SELECT id, parentId FROM item);
    TRUNCATE TABLE temp_table;

    OPEN cursor_item;
    item_loop: LOOP
        -- fetch a row through the cursor
        FETCH cursor_item INTO cursor_ID, cursor_PARENTID;
        IF done THEN
            -- end the loop when the cursor is exhausted
            LEAVE item_loop;
        END IF;
        -- walk up the ancestor chain of the current row and store (id, root) in the temp table
        INSERT INTO temp_table
        SELECT MAX(t.id) id, MIN(@pv := t.parentId) parentId
        FROM (SELECT * FROM item ORDER BY id DESC) t
        JOIN (SELECT @pv := cursor_ID) tmp
        WHERE t.id = @pv;
    END LOOP;
    -- close the cursor
    CLOSE cursor_item;

    -- return the results
    SELECT id id, parentId RootId FROM temp_table ORDER BY id ASC;
END
I created a temporary table and stored the results in it while running the cursor. I couldn't think of a solution with just one query, so I had to go with a cursor.
I got help from the following links:
How to do the Recursive SELECT query in MySQL?
How to create a MySQL hierarchical recursive query
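For what it's worth, on MySQL 8.0 or later (an assumption; the question does not state a server version), a recursive CTE can produce the same result in a single query. A minimal sketch:
-- Walk up from every item; next_parent tracks the remaining path,
-- and when it becomes NULL we have reached the root.
WITH RECURSIVE chain AS (
    SELECT id, id AS ancestor, parentId AS next_parent
    FROM item
  UNION ALL
    SELECT c.id, c.next_parent, i.parentId
    FROM chain c
    JOIN item i ON i.id = c.next_parent
)
SELECT c.id AS ID,
       NULLIF(c.ancestor, c.id) AS RootId  -- NULL when the item has no parent
FROM chain c
WHERE c.next_parent IS NULL
ORDER BY c.id;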

Select and insert into a table in mysql

MySQL table
create table table1(
  id int(3) zerofill auto_increment primary key,
  username varchar(10)
)
engine=innodb;
MySQL insert query
insert into table1 (username)
select id from (select id from table1) as a where
a.id=last_insert_id();
I am trying to insert into a table by selecting the last id from the same table and the same row; the queries above explain what I want to do. The insert query gives NULL values for both the id and the username.
The expected result is below.
id username
001 001
002 002
003 003
A possible approach
INSERT INTO table1 (username)
SELECT LPAD(COALESCE(MAX(id), 0) + 1, 3, '0')
FROM table1
Here is SQLFiddle demo
A drawback of this approach is that under heavy load different concurrent users may get the same MAX(id) and you'll end up with rows that have different ids but the same username.
Now, the more precise way to do it involves a separate sequencing table and a BEFORE INSERT trigger.
Proposed changed table schema:
CREATE TABLE table1_seq
(
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
);
CREATE TABLE table1
(
  id INT(3) ZEROFILL PRIMARY KEY DEFAULT 0,
  username VARCHAR(10)
);
The trigger
DELIMITER $$
CREATE TRIGGER tg_table1_before_insert
BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
INSERT INTO table1_seq VALUES(NULL);
SET NEW.id = LAST_INSERT_ID(), NEW.username = LPAD(NEW.id, 3, '0');
END$$
DELIMITER ;
Now you just insert new rows into table1 like this
INSERT INTO table1 (username)
VALUES (NULL), (NULL)
Outcome:
| ID | USERNAME |
-----------------
| 1 | 001 |
| 2 | 002 |
Here is SQLFiddle demo
Why store the value at all?
CREATE TABLE table1 (
  id int(3) zerofill auto_increment PRIMARY KEY
);
CREATE VIEW oh_look_username
AS
SELECT id
     , LPAD(CAST(id AS CHAR(10)), 3, '0') AS username
  FROM table1;
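A quick usage sketch for this approach (the inserted rows and expected values are illustrative, not from the original answer):
-- insert two rows; only the auto-increment id is stored
INSERT INTO table1 VALUES (), ();

-- the view derives the padded username on the fly
SELECT * FROM oh_look_username;
-- expected usernames: '001', '002'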

How to make MySQL table primary key auto increment with some prefix

I have a table like this:
table
id Varchar(45) NOT NULL AUTO_INCREMENT PRIMARY KEY,
name CHAR(30) NOT NULL,
I want to increment my id field like 'LHPL001','LHPL002','LHPL003'... etc.
What do I have to do for that? Please let me know any possible way.
If you really need this, you can achieve your goal with the help of a separate table for sequencing (if you don't mind) and a trigger.
Tables
CREATE TABLE table1_seq
(
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
);
CREATE TABLE table1
(
  id VARCHAR(7) NOT NULL PRIMARY KEY DEFAULT '0',
  name VARCHAR(30)
);
Now the trigger
DELIMITER $$
CREATE TRIGGER tg_table1_insert
BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
INSERT INTO table1_seq VALUES (NULL);
SET NEW.id = CONCAT('LHPL', LPAD(LAST_INSERT_ID(), 3, '0'));
END$$
DELIMITER ;
Then you just insert rows into table1:
INSERT INTO Table1 (name)
VALUES ('Jhon'), ('Mark');
And you'll have
| ID | NAME |
------------------
| LHPL001 | Jhon |
| LHPL002 | Mark |
Here is SQLFiddle demo
Create a table with a normal numeric auto_increment ID, but either define it with ZEROFILL, or use LPAD to add zeroes when selecting. Then CONCAT the values to get your intended behavior. Example #1:
create table so (
id int(3) unsigned zerofill not null auto_increment primary key,
name varchar(30) not null
);
insert into so set name = 'John';
insert into so set name = 'Mark';
select concat('LHPL', id) as id, name from so;
+---------+------+
| id | name |
+---------+------+
| LHPL001 | John |
| LHPL002 | Mark |
+---------+------+
Example #2:
create table so (
id int unsigned not null auto_increment primary key,
name varchar(30) not null
);
insert into so set name = 'John';
insert into so set name = 'Mark';
select concat('LHPL', LPAD(id, 3, 0)) as id, name from so;
+---------+------+
| id | name |
+---------+------+
| LHPL001 | John |
| LHPL002 | Mark |
+---------+------+
I know it is late, but I just want to share what I have done for this. I'm not allowed to add another table or trigger, so I need to generate the value in a single query upon insert. For your case, you can try this query.
CREATE TABLE YOURTABLE(
IDNUMBER VARCHAR(7) NOT NULL PRIMARY KEY,
ENAME VARCHAR(30) not null
);
Perform a select using the query below and save the result into the user variable @IDNUMBER:
SELECT IFNULL(
         CONCAT('LHPL', LPAD(
           (SUBSTRING_INDEX(MAX(`IDNUMBER`), 'LHPL', -1) + 1), 3, '0')),
         'LHPL001')
  INTO @IDNUMBER
FROM YOURTABLE;
And then the insert query will be:
INSERT INTO YOURTABLE (IDNUMBER, ENAME) VALUES
(@IDNUMBER, 'EMPLOYEE NAME');
The result will be the same as with the other answers, but the difference is that you will not need to create another table or a trigger. I hope this helps someone who has the same case as mine.
Here is a PostgreSQL example without a trigger, if someone needs it on PostgreSQL:
CREATE SEQUENCE messages_seq;
CREATE TABLE IF NOT EXISTS messages (
  id CHAR(20) NOT NULL DEFAULT ('message_' || nextval('messages_seq')),
  name CHAR(30) NOT NULL
);
ALTER SEQUENCE messages_seq OWNED BY messages.id;
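A short usage sketch for the PostgreSQL variant (the inserted rows and expected ids are illustrative):
-- the DEFAULT expression fills in the prefixed id from the sequence
INSERT INTO messages (name) VALUES ('hello'), ('world');

SELECT id, name FROM messages;
-- expected ids similar to: message_1, message_2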

MySQL unique index over two columns - auto increment [duplicate]

I have multiple databases with the same structure in which data is sometimes copied across. In order to maintain data integrity I am using two columns as the primary key. One is a database id, which links to a table with info about each database. The other is a table key. It is not unique because it may have multiple rows with this value being the same, but different values in the database_id column.
I am planning on making the two columns into a joint primary key. However I also want to set the table key to auto increment - but based on the database_id column.
E.g., with this data:
table_id database_id other_columns
1 1
2 1
3 1
1 2
2 2
If I am adding data that includes a database_id of 1, then I want table_id to be automatically set to 4. If the database_id is entered as 2, then I want table_id to be automatically set to 3, etc.
What is the best way of achieving this in MySQL?
If you are using MyISAM:
http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
For your example:
mysql> CREATE TABLE mytable (
-> table_id MEDIUMINT NOT NULL AUTO_INCREMENT,
-> database_id MEDIUMINT NOT NULL,
-> other_column CHAR(30) NOT NULL,
-> PRIMARY KEY (database_id,table_id)
-> ) ENGINE=MyISAM;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO mytable (database_id, other_column) VALUES
-> (1,'Foo'),(1,'Bar'),(2,'Baz'),(1,'Bam'),(2,'Zam'),(3,'Zoo');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM mytable ORDER BY database_id,table_id;
+----------+-------------+--------------+
| table_id | database_id | other_column |
+----------+-------------+--------------+
| 1 | 1 | Foo |
| 2 | 1 | Bar |
| 3 | 1 | Bam |
| 1 | 2 | Baz |
| 2 | 2 | Zam |
| 1 | 3 | Zoo |
+----------+-------------+--------------+
6 rows in set (0.00 sec)
Here's one approach when using InnoDB which will also be very performant due to the clustered composite index (only available with InnoDB)...
http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
drop table if exists db;
create table db
(
db_id smallint unsigned not null auto_increment primary key,
next_table_id int unsigned not null default 0
)engine=innodb;
drop table if exists tables;
create table tables
(
db_id smallint unsigned not null,
table_id int unsigned not null default 0,
primary key (db_id, table_id) -- composite clustered index
)engine=innodb;
delimiter #
create trigger tables_before_ins_trig before insert on tables
for each row
begin
declare v_id int unsigned default 0;
select next_table_id + 1 into v_id from db where db_id = new.db_id;
set new.table_id = v_id;
update db set next_table_id = v_id where db_id = new.db_id;
end#
delimiter ;
insert into db (next_table_id) values (null),(null),(null);
insert into tables (db_id) values (1),(1),(2),(1),(3),(2);
select * from db;
select * from tables;
You can make the two-column key a UNIQUE key and keep the auto-increment column as the PRIMARY KEY.
The solution provided by DTing is excellent and works. But when I tried the same in AWS Aurora, it didn't work and complained with the error below:
Error Code: 1075. Incorrect table definition; there can be only one auto column and it must be defined as a key
Hence I'm suggesting a JSON-based solution here.
CREATE TABLE DB_TABLE_XREF (
  db     VARCHAR(36) NOT NULL,
  tables JSON,
  PRIMARY KEY (db)
);
Keep the first key as a regular column, keep the second key inside the JSON, and treat the "seq" entry in the JSON as the auto-increment sequence.
INSERT INTO `DB_TABLE_XREF`
(`db`, `tables`)
VALUES
('account_db', '{"user_info": 1, "seq" : 1}')
ON DUPLICATE KEY UPDATE `tables` =
JSON_SET(`tables`,
'$."user_info"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1),
'$."seq"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1)
);
And the output looks like this:
account_db {"user_info" : 1, "user_details" : 2, "seq" : 2}
product_db {"product1" : 1, "product2" : 2, "product3" : 3, "seq" : 3}
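To show how the output above comes about, here is a hedged sketch of a second upsert for a new table name (the key name user_details is taken from the sample output; everything else follows the same pattern as the statement above):
INSERT INTO `DB_TABLE_XREF` (`db`, `tables`)
VALUES ('account_db', '{"user_details": 1, "seq" : 1}')
ON DUPLICATE KEY UPDATE `tables` =
  JSON_SET(`tables`,
    '$."user_details"', IFNULL(`tables` -> '$."user_details"', `tables` -> '$."seq"' + 1),
    '$."seq"',          IFNULL(`tables` -> '$."user_details"', `tables` -> '$."seq"' + 1)
  );
-- starting from {"user_info": 1, "seq": 1}, this yields
-- {"user_info": 1, "user_details": 2, "seq": 2}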
If your secondary keys are huge and you're afraid of using JSON, then I would suggest a stored procedure that checks MAX(secondary_column) while holding a lock, like below.
SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
IF t_id = 0 THEN
  SELECT GET_LOCK(db_name, 10) INTO acq_lock;
  -- CALL debug_msg(TRUE, "Acquiring lock");
  IF acq_lock = 1 THEN
    -- double-check after acquiring the lock
    SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
    IF t_id = 0 THEN
      SELECT IFNULL((SELECT MAX(table_id) FROM (SELECT table_id FROM DB_TABLE_XREF WHERE `database` = db_name) AS something), 0) + 1 INTO t_id;
      INSERT INTO DB_TABLE_XREF VALUES (db_name, table_name, t_id);
    END IF;
    DO RELEASE_LOCK(db_name);
  -- ELSE
    -- CALL debug_msg(TRUE, "Failed to acquire lock");
  END IF;
  COMMIT;
END IF;
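For completeness, a hedged sketch of how that fragment might be wrapped into a callable procedure; the procedure and parameter names are assumptions, and the fragment above goes where the comment indicates:
DELIMITER $$
CREATE PROCEDURE next_table_id(IN db_name VARCHAR(36), IN table_name VARCHAR(64), OUT t_id INT)
BEGIN
  DECLARE acq_lock INT DEFAULT 0;
  SET t_id = 0;
  -- the SELECT / GET_LOCK / INSERT fragment shown above goes here
END$$
DELIMITER ;

CALL next_table_id('account_db', 'user_info', @id);
SELECT @id;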
