Let's look at this example database:
As we can see, person depends on city (person.city_id is a foreign key). I don't delete rows; I just set them inactive (active=0). After setting a city inactive, how can I automatically set all persons that depend on that city inactive? Is there a better way than writing triggers?
EDIT: I am interested only in setting person rows inactive, not in re-activating them.
Here's a solution that uses cascading foreign keys to do what you describe:
mysql> create table city (
id int not null auto_increment,
name varchar(45),
active tinyint,
primary key (id),
unique key (id, active));
mysql> create table person (
id int not null auto_increment,
city_id int,
active tinyint,
primary key (id),
foreign key (city_id, active) references city (id, active) on update cascade);
mysql> insert into city (name, active) values ('New York', 1);
mysql> insert into person (city_id, active) values (1, 1);
mysql> select * from person;
+----+---------+--------+
| id | city_id | active |
+----+---------+--------+
|  1 |       1 |      1 |
+----+---------+--------+
mysql> update city set active = 0 where id = 1;
mysql> select * from person;
+----+---------+--------+
| id | city_id | active |
+----+---------+--------+
|  1 |       1 |      0 |
+----+---------+--------+
Tested on MySQL 5.5.31.
Maybe you should reconsider how you define a person to be active. Instead of defining active twice, keep it only in the city table and have your SELECT statements return persons WHERE city.active = 1, as in the sketch below.
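A minimal sketch of that approach, assuming the city/person schema from the question:
SELECT p.*
FROM person AS p
JOIN city AS c ON c.id = p.city_id
WHERE c.active = 1;  -- a person counts as active only while its city is active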
But if you must, you could do something like:
UPDATE city C
LEFT JOIN person P ON C.id = P.city_id
SET C.active = 0, P.active = 0
WHERE C.id = #id
Related
I want to know if there is a way to get the IDs of records updated with ON DUPLICATE KEY UPDATE.
For example, I have the users table with the following schema:
CREATE TABLE `users` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(255) NOT NULL,
`username` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idx-users-email` (`email`)
);
and insert some users:
INSERT INTO users (email, username) VALUES ("pioz@example.org", "pioz"),("luke@example.org", "luke"),("mike@example.org", "mike");
the result is:
+----+------------------+----------+
| id | email            | username |
+----+------------------+----------+
|  1 | pioz@example.org | pioz     |
|  2 | luke@example.org | luke     |
|  3 | mike@example.org | mike     |
+----+------------------+----------+
Now I want to know if, with a query like the following one, it is possible to get the IDs of the updated records:
INSERT INTO users (email, username) VALUES ("luke@example.org", "luke2"),("mike@example.org", "mike2") ON DUPLICATE KEY UPDATE username=VALUES(username);
In this example, IDs 2 and 3.
It seems that the only solution is to use a stored procedure. Here is an example for one row, which could be expanded.
See dbFiddle link below for schema and testing.
DELIMITER //
CREATE PROCEDURE add_update_user(IN e_mail VARCHAR(255), IN user_name VARCHAR(255))
BEGIN
DECLARE maxB4 INT DEFAULT 0;
DECLARE current INT DEFAULT 0;
-- remember the highest id before the upsert
SELECT MAX(id) INTO maxB4 FROM users;
INSERT INTO users (email, username) VALUES
(e_mail, user_name)
ON DUPLICATE KEY UPDATE username = VALUES(username);
-- the row now holding this email is the one that was created or updated
SELECT id INTO current FROM users WHERE email = e_mail;
SELECT CASE WHEN maxB4 < current THEN CONCAT('New user with ID ', current, ' created')
ELSE CONCAT('User with ID ', current, ' updated') END AS Report;
/* SELECT CASE WHEN maxB4 < current THEN 1 ELSE 0 END; */
END//
DELIMITER ;
call add_update_user('jake@example.com','Jake');
| Report                     |
| :------------------------- |
| New user with ID 6 created |
call add_update_user('jake@example.com','Jason');
| Report                 |
| :--------------------- |
| User with ID 6 updated |
db<>fiddle here
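If you need the multi-row case without a procedure, one hedged workaround is to select the ids of the already-existing rows immediately before the upsert (assuming the users schema above; not safe under concurrent writers unless wrapped in a transaction):
-- these are the rows the upsert will update rather than insert
SELECT id FROM users WHERE email IN ('luke@example.org', 'mike@example.org');
INSERT INTO users (email, username) VALUES
('luke@example.org', 'luke2'), ('mike@example.org', 'mike2')
ON DUPLICATE KEY UPDATE username = VALUES(username);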
Plan A: Use the technique in the reference manual; see LAST_INSERT_ID().
Plan B: Get rid of id and make email the PRIMARY KEY.
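For Plan A, a minimal sketch of the manual's LAST_INSERT_ID() trick; note that it reports one id per statement, so duplicates have to be upserted one at a time:
INSERT INTO users (email, username)
VALUES ('luke@example.org', 'luke2')
ON DUPLICATE KEY UPDATE
id = LAST_INSERT_ID(id),   -- makes LAST_INSERT_ID() return the updated row's id
username = VALUES(username);
SELECT LAST_INSERT_ID();   -- 2 for the example data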
I want to write a SQL script which inserts some value into a table for every id in some other table.
create table person
(
id int(11) not null auto_increment,
name varchar(255),
primary key (id)
);
insert into person
values (null, 'John'), (null, 'Sam');
select * from person;
id | name
----------
1 | John
2 | Sam
create table phone_details
(
id int(11) not null auto_increment,
person_id int(11),
phone int(11),
constraint person_ibfk_1 foreign key (person_id) references person (id) on delete no action,
primary key (id)
);
Now, in the phone_details table, I want the following:
id | person_id | phone
----------------------------
1 | 1 | 9999999999
2 | 2 | 9999999999
How do I do that? Currently I am using Java to write this one-time script, but I think there must be a way of doing this in SQL.
You can use INSERT INTO ... SELECT syntax:
INSERT INTO phone_details (person_id, phone)
SELECT id, 99999999  -- note: the 10-digit 9999999999 would overflow a signed INT
FROM person;
Consider storing phone numbers as VARCHAR; your expected value 9999999999 exceeds the signed INT maximum of 2147483647.
SqlFiddleDemo
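If you take the VARCHAR suggestion, a sketch (assuming the column can still be altered):
ALTER TABLE phone_details MODIFY phone VARCHAR(15);
INSERT INTO phone_details (person_id, phone)
SELECT id, '9999999999'
FROM person;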
I have two tables in my database, foo and bar. Multiple foos reference one bar, and foo has a second column that controls the ordering of foos sharing the same bar. The values used for ordering should be unique, so the table has a unique constraint:
CREATE TABLE bar (
id int NOT NULL AUTO_INCREMENT,
PRIMARY KEY (id)
);
CREATE TABLE foo (
id INT NOT NULL AUTO_INCREMENT,
bar INT,
`order` INT,
PRIMARY KEY (id),
FOREIGN KEY (bar) REFERENCES bar (id),
UNIQUE KEY (bar, `order`)
);
What is the most efficient way to update the ordering of the foos for one bar? For example, if I have this data:
id | bar | order
1 | 1 | 1
2 | 1 | 2
3 | 1 | 3
4 | 1 | 4
To reorder them as (1, 2, 4, 3), I currently use the following queries:
UPDATE foo SET `order` = NULL WHERE id IN (3, 4);
UPDATE foo SET `order` = 3 WHERE id = 4;
UPDATE foo SET `order` = 4 WHERE id = 3;
The first query is necessary to prevent an integrity error for the other updates. Can this be improved?
The only improvement I can think of is to put your update values in a separate table/query, to make it more generic so it can work with multiple IDs:
newQuery
ID newOrder
3 4
4 3
You update order to NULL before the real update because of the integrity restriction.
UPDATE foo SET `order` = NULL WHERE id IN (SELECT ID FROM newQuery);
Then update with a JOIN:
UPDATE foo AS f
INNER JOIN newQuery AS n ON f.id = n.id
SET f.order = n.newOrder
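newQuery is not defined above; a sketch of it as a temporary table (hypothetical layout matching the listing):
CREATE TEMPORARY TABLE newQuery (
id INT PRIMARY KEY,
newOrder INT
);
INSERT INTO newQuery (id, newOrder) VALUES (3, 4), (4, 3);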
I have multiple databases with the same structure in which data is sometimes copied across. In order to maintain data integrity I am using two columns as the primary key. One is a database id, which links to a table with info about each database. The other is a table key. It is not unique because it may have multiple rows with this value being the same, but different values in the database_id column.
I am planning on making the two columns into a joint primary key. However I also want to set the table key to auto increment - but based on the database_id column.
E.g., with this data:
table_id database_id other_columns
1 1
2 1
3 1
1 2
2 2
If I am adding data that includes a database_id of 1 then I want table_id to be automatically set to 4. If the database_id is entered as 2 then I want table_id to be automatically set to 3, etc.
What is the best way of achieving this in MySQL?
If you are using MyISAM, see:
http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
For your example:
mysql> CREATE TABLE mytable (
-> table_id MEDIUMINT NOT NULL AUTO_INCREMENT,
-> database_id MEDIUMINT NOT NULL,
-> other_column CHAR(30) NOT NULL,
-> PRIMARY KEY (database_id,table_id)
-> ) ENGINE=MyISAM;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO mytable (database_id, other_column) VALUES
-> (1,'Foo'),(1,'Bar'),(2,'Baz'),(1,'Bam'),(2,'Zam'),(3,'Zoo');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM mytable ORDER BY database_id,table_id;
+----------+-------------+--------------+
| table_id | database_id | other_column |
+----------+-------------+--------------+
| 1 | 1 | Foo |
| 2 | 1 | Bar |
| 3 | 1 | Bam |
| 1 | 2 | Baz |
| 2 | 2 | Zam |
| 1 | 3 | Zoo |
+----------+-------------+--------------+
6 rows in set (0.00 sec)
Here's one approach for InnoDB which will also perform very well thanks to the clustered composite index (only available with InnoDB):
http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
drop table if exists db;
create table db
(
db_id smallint unsigned not null auto_increment primary key,
next_table_id int unsigned not null default 0
)engine=innodb;
drop table if exists tables;
create table tables
(
db_id smallint unsigned not null,
table_id int unsigned not null default 0,
primary key (db_id, table_id) -- composite clustered index
)engine=innodb;
delimiter #
create trigger tables_before_ins_trig before insert on tables
for each row
begin
declare v_id int unsigned default 0;
select next_table_id + 1 into v_id from db where db_id = new.db_id for update; -- lock the counter row so concurrent inserts serialize
set new.table_id = v_id;
update db set next_table_id = v_id where db_id = new.db_id;
end#
delimiter ;
insert into db (next_table_id) values (0),(0),(0); -- an explicit NULL would fail in strict mode for this NOT NULL column
insert into tables (db_id) values (1),(1),(2),(1),(3),(2);
select * from db;
select * from tables;
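If I'm reading the trigger correctly, the final select on tables should show per-database sequences along these lines:
+-------+----------+
| db_id | table_id |
+-------+----------+
|     1 |        1 |
|     1 |        2 |
|     1 |        3 |
|     2 |        1 |
|     2 |        2 |
|     3 |        1 |
+-------+----------+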
You can make the two-column key a UNIQUE key instead and make the auto-increment column the PRIMARY KEY.
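A sketch of that layout; the caveat is that the auto-increment table_id is then globally sequential, so it no longer restarts at 1 for each database_id:
CREATE TABLE mytable (
table_id INT NOT NULL AUTO_INCREMENT,
database_id INT NOT NULL,
other_column CHAR(30) NOT NULL,
PRIMARY KEY (table_id),
UNIQUE KEY (database_id, table_id)
) ENGINE=InnoDB;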
The solution provided by DTing is excellent and works. But when I tried the same in AWS Aurora, it didn't work and complained with the error below.
Error Code: 1075. Incorrect table definition; there can be only one auto column and it must be defined as a key
Hence I am suggesting a JSON-based solution here.
CREATE TABLE DB_TABLE_XREF (
db VARCHAR(36) NOT NULL,
tables JSON,
PRIMARY KEY (db)
);
Keep the first key as a regular column, store the second key inside the JSON, and maintain its value with an auto-increment-style seq counter kept inside the JSON.
INSERT INTO `DB_TABLE_XREF`
(`db`, `tables`)
VALUES
('account_db', '{"user_info": 1, "seq" : 1}')
ON DUPLICATE KEY UPDATE `tables` =
JSON_SET(`tables`,
'$."user_info"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1),
'$."seq"',
IFNULL(`tables` -> '$."user_info"', `tables` -> '$."seq"' + 1)
);
The output looks like this:
account_db {"user_info" : 1, "user_details" : 2, "seq" : 2}
product_db {"product1" : 1, "product2" : 2, "product3" : 3, "seq" : 3}
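A quick sketch of reading an assigned id back out of the JSON:
SELECT `tables` -> '$."user_info"' AS user_info_id
FROM DB_TABLE_XREF
WHERE db = 'account_db';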
If you have many secondary keys and are wary of using JSON, then I would suggest a stored procedure that checks MAX(secondary_column) while holding a lock, like the fragment below (this assumes a plain three-column layout: `database`, `table`, table_id):
SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
IF t_id = 0 THEN
SELECT GET_LOCK(db_name, 10) INTO acq_lock;  -- serialize writers per database
-- CALL debug_msg(TRUE, "Acquiring lock");
IF acq_lock = 1 THEN
-- double-check under the lock
SELECT table_id INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name AND `table` = table_name;
IF t_id = 0 THEN
SELECT IFNULL(MAX(table_id), 0) + 1 INTO t_id FROM DB_TABLE_XREF WHERE `database` = db_name;
INSERT INTO DB_TABLE_XREF VALUES (db_name, table_name, t_id);
END IF;
DO RELEASE_LOCK(db_name);  -- the original fragment never released the lock
-- ELSE: lock not acquired; report or retry here
END IF;
COMMIT;
END IF;