So I have two tables: posts and server_options.
posts consists of:
no
date_created
server_id
user_id
etc
server_options consists of:
no
posts_per_user
server_id
etc...
A user is only allowed server_options.posts_per_user posts per server. To enforce this, I have a trigger executed before insert and update on posts:
CREATE DEFINER=`root`@`localhost` TRIGGER `bi_posts` BEFORE INSERT ON `posts` FOR EACH ROW BEGIN
  SELECT posts_per_user INTO @posts_per_user FROM `server_options` WHERE server_id = NEW.server_id LIMIT 1;
  SELECT COUNT(0) INTO @post_count FROM `posts` WHERE server_id = NEW.server_id AND user_id = NEW.user_id;
  IF @post_count >= @posts_per_user
  THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Cannot add or update row: post limit exceeded.';
  END IF;
END
Simple enough. However, what I'd like to do is also add a trigger to server_options on update. If posts_per_user changes, it should remove any "excess" posts for that server from posts.
For example. If some users have 5 posts in a server, and the posts_per_user is reduced to 3, it should delete the oldest and keep only 3 for that user for that server.
Any pointers on where to begin? no is an auto-increment PK, so we can sort by that rather than date_created to make things easier.
I was thinking this (it works), but to me it seems like a bad approach:
CREATE DEFINER=`root`@`localhost` TRIGGER `au_server_options` AFTER UPDATE ON `server_options` FOR EACH ROW BEGIN
IF NEW.posts_per_user < OLD.posts_per_user
THEN
DELETE FROM `posts`
WHERE NEW.posts_per_user <= (
SELECT count(0)
FROM (SELECT no, server_id, user_id FROM posts) AS posts_temp
WHERE
`posts`.`server_id` = `posts_temp`.`server_id` AND
`posts`.`user_id` = `posts_temp`.`user_id` AND
`posts`.`no` > `posts_temp`.`no`
ORDER BY `posts`.`no`
);
END IF;
END
I'm not looking at huge numbers of entries. Maybe 100,000 or so.
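An alternative I'm considering for MySQL 8.0+ would replace the DELETE above with a window-function version restricted to the updated server (untested sketch; assumes no is the auto-increment PK and that the newest posts are the ones to keep):
DELETE p
FROM `posts` AS p
JOIN (
    SELECT `no`,
           ROW_NUMBER() OVER (PARTITION BY `user_id` ORDER BY `no` DESC) AS rn
    FROM `posts`
    WHERE `server_id` = NEW.server_id
) AS ranked ON ranked.`no` = p.`no`
WHERE ranked.rn > NEW.posts_per_user;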
Thanks, y'all!
Related
Assuming MySQL with a users table like
id | user_name | total_likes | updated_at
and a likes table like
id | user_id | like
What I need is: when the likes table gets updated/inserted/deleted, the users.updated_at associated with likes.user_id gets updated to the current date. Further, when an insert happens on the likes table, users.total_likes increases by 1.
I can do all the above with queries; however, I am trying to use the power of relationships in MySQL. Can you please advise?
As David mentioned above, you are most likely looking for triggers. Something like this:
Delimiter //
CREATE TRIGGER ai_likes AFTER INSERT ON likes
FOR EACH ROW
BEGIN
  UPDATE users SET updated_at = NOW()
  WHERE id = NEW.user_id;
  UPDATE users SET total_likes = (total_likes + 1)
  WHERE id = NEW.user_id;
END//
Delimiter ;
CREATE TRIGGER au_likes AFTER UPDATE ON likes
FOR EACH ROW
UPDATE users SET updated_at = NOW()
WHERE id = NEW.user_id;
CREATE TRIGGER ad_likes AFTER DELETE ON likes
FOR EACH ROW
UPDATE users SET updated_at = NOW()
WHERE id = OLD.user_id;
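The two updates in ai_likes could also be collapsed into a single statement so the users row is only touched once (sketch):
UPDATE users
SET updated_at = NOW(),
    total_likes = total_likes + 1
WHERE id = NEW.user_id;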
There are two tables: orders, orders_history.
orders
________
id | status
orders_history
id | order_id | status | user_id
orders_history contains the history of all users' actions, while orders.status contains the latest status from orders_history.status.
I run these queries in a transaction:
transaction start
insert into orders_history...
$status = select status from orders_history order by id desc limit 1;
update orders set status = $status where orders.id = id
My questions are:
Should I use a transaction, and is this the proper way to do it?
What if several transactions try to insert into or update orders_history for the same order_id?
As suggested in the comments above, you could use a trigger to update the orders table -
DELIMITER $$
CREATE TRIGGER `update_order_status` AFTER INSERT ON `orders_history`
FOR EACH ROW
UPDATE `orders` SET `status` = NEW.status WHERE id = NEW.order_id;
$$
DELIMITER ;
The better option would be to not store the redundant status in orders and just query for most recent status in orders_history.
SELECT orders.id, (SELECT status FROM orders_history oh WHERE orders.id = oh.order_id ORDER BY id DESC LIMIT 1) AS status
FROM orders
The design pattern I might use in this case is...
Table 1: History -- this is an audit trail of everything that has gone on. (Think: All the checks written and deposits made to a checking account.)
Table 2: Current -- this is the current status of the information. (Think: current account balance, status, etc.)
Whenever something happens (e.g., a check clears):
START TRANSACTION;
INSERT INTO History ...;
UPDATE Current ...;
COMMIT;
In the case of a checking account, something different is needed if your account is overdrawn, so let's make the transaction more complex:
START TRANSACTION;
SELECT balance FROM Current WHERE acct = 123 FOR UPDATE;
if would be overdrawn then
email user
UPDATE Current SET status = 'overdrawn' WHERE acct = 123;
...
else
INSERT INTO History ...;
UPDATE Current ...;
endif
COMMIT;
I prefer to put the "business logic" clearly in one place, not hidden in a Trigger. (I might use a Trigger for monitoring or logging, but not for the main purpose of the tables.)
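Applied to the orders / orders_history tables from the question, the simple case (no branching) might look like this; the id, status, and user values are purely illustrative:
START TRANSACTION;
INSERT INTO orders_history (order_id, status, user_id)
VALUES (123, 'shipped', 45);
UPDATE orders SET status = 'shipped' WHERE id = 123;
COMMIT;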
I have this table seller whose columns are
id mobile1
1 787811
I have another table with the same columns, say "copy". I just want to update the mobile1 field in this table with the values from the other table.
I have written this query
UPDATE seller
SET mobile1 = (
SELECT SUBSTRING_INDEX(mobile1, '.', 1)
FROM copy)
WHERE 1;
I am getting this obvious error when I run it:
Sub-query returns more than 1 row
Any way to do this?
You need a condition that selects only one row, or you should use LIMIT:
UPDATE seller
SET mobile1 = (
SELECT SUBSTRING_INDEX(mobile1, '.', 1)
FROM copy
LIMIT 1)
WHERE id = 1;
You can constrain the number of rows returned to just one using MySQL's LIMIT.
UPDATE seller SET mobile1=(SELECT SUBSTRING_INDEX(mobile1,'.',1)
FROM copy LIMIT 1)
WHERE id=1;
For anyone looking for a possible answer, here is what I did: I created a procedure with a WHILE loop.
DELIMITER $$
CREATE PROCEDURE update_mobile(IN counting BIGINT)
BEGIN
  DECLARE x INT DEFAULT 0;
  SET x = 1;
  WHILE x <= counting DO
    UPDATE copy SET mobile1 = (SELECT SUBSTRING_INDEX(mobile1, '.', 1) FROM seller WHERE id = x LIMIT 1) WHERE id = x;
    SET x = x + 1;
  END WHILE;
END$$
DELIMITER ;
And finally I calculated the number of rows with COUNT(id) and passed this number to my procedure:
SET @var = (SELECT COUNT(id) FROM copy);
CALL update_mobile(@var);
And it worked like a charm...
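That said, if both tables share the same id values, the loop can usually be replaced by a single set-based UPDATE with a join (sketch, same direction as the procedure above, i.e. copying from seller into copy):
UPDATE copy
JOIN seller ON seller.id = copy.id
SET copy.mobile1 = SUBSTRING_INDEX(seller.mobile1, '.', 1);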
If you want to copy all the data, you can do this:
INSERT INTO `seller` (`mobile1`) SELECT SUBSTRING_INDEX(mobile1,'.',1) FROM copy
Assume I've got a users table with 1M users on MySQL/InnoDB:
users
userId (Primary Key, Int)
status (Int)
more data
If I want an exact count of the number of users with status = 1 (denoting an activated account), what would be the way to go for big tables? I was thinking along the lines of:
usercounts
status
count
And then run a TRIGGER AFTER INSERT on users that updates the appropriate columns in usercounts.
Would this be the best way to go?
PS. An extra small question: since you also need a TRIGGER AFTER UPDATE on users for when status changes, is there a syntax available that:
Covers both the TRIGGER AFTER INSERT and TRIGGER AFTER UPDATE on status?
Increments the count by one if a count already is present, else inserts a new (status, count = 0) pair?
Would this be the best way to go?
Best (that's opinion-based) or not, it's definitely a possible way to go.
is there a syntax available that: covers both the TRIGGER AFTER INSERT and TRIGGER AFTER UPDATE on status?
No. There isn't a compound trigger syntax in MySQL. You'll have to create separate triggers.
is there a syntax available that: increments the count by one if a count already is present, else inserts a new (status, count = 0) pair?
Yes. You can use the ON DUPLICATE KEY clause of the INSERT statement. Make sure that status is the PK of the usercounts table.
Now, if users can be deleted, even if only for maintenance purposes, you also need to cover that with an AFTER DELETE trigger.
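For reference, a minimal usercounts definition that satisfies this might be (sketch; the column name cnt matches the triggers below):
CREATE TABLE usercounts (
  status INT NOT NULL PRIMARY KEY,
  cnt INT NOT NULL DEFAULT 0
);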
That being said, your triggers might look something like this:
CREATE TRIGGER tg_ai_users
AFTER INSERT ON users
FOR EACH ROW
INSERT INTO usercounts (status, cnt)
VALUES (NEW.status, 1)
ON DUPLICATE KEY UPDATE cnt = cnt + 1;
CREATE TRIGGER tg_ad_users
AFTER DELETE ON users
FOR EACH ROW
UPDATE usercounts
SET cnt = cnt - 1
WHERE status = OLD.status;
DELIMITER $$
CREATE TRIGGER tg_au_users
AFTER UPDATE ON users
FOR EACH ROW
BEGIN
IF NOT NEW.status <=> OLD.status THEN -- proceed ONLY if status has been changed
UPDATE usercounts
SET cnt = cnt - 1
WHERE status = OLD.status;
INSERT INTO usercounts (status, cnt) VALUES (NEW.status, 1)
ON DUPLICATE KEY UPDATE cnt = cnt + 1;
END IF;
END$$
DELIMITER ;
To initially populate usercounts table use
INSERT INTO usercounts (status, cnt)
SELECT status, COUNT(*)
FROM users
GROUP BY status
Here is a SQLFiddle demo.
I think there are simpler options available to you.
Just add an index to the field you'd like to count on.
ALTER TABLE users ADD KEY (status);
Now a select should be very fast.
SELECT COUNT(*) FROM users WHERE status = 1
We have a system that has a database-based queue for processing items in threads instead of in real time. It's currently implemented in MyBatis, calling this stored procedure in MySQL:
DROP PROCEDURE IF EXISTS pop_invoice_queue;
DELIMITER ;;
CREATE PROCEDURE pop_invoice_queue(IN compId int(11), IN limitRet int(11)) BEGIN
SELECT LAST_INSERT_ID(id) as value, InvoiceQueue.* FROM InvoiceQueue
WHERE companyid = compId
AND (lastPopDate is null OR lastPopDate < DATE_SUB(NOW(), INTERVAL 3 MINUTE)) LIMIT limitRet FOR UPDATE;
UPDATE InvoiceQueue SET lastPopDate=NOW() WHERE id=LAST_INSERT_ID();
END;;
DELIMITER ;
The problem is that this pops N items from the queue but only updates the lastPopDate value for the last item popped off the queue. So if we call this stored procedure with limitRet = 5, it will pop five items off the queue and start working on them, but only the fifth item will have a lastPopDate set, so when the next thread comes and pops off the queue it will get items 1-4 and item 6.
How can we get this to update all N records 'popped' off the database?
If you are willing to add a BIGINT field to the table via:
ALTER TABLE InvoiceQueue
ADD uuid BIGINT NULL DEFAULT NULL,
INDEX ix_uuid (uuid);
then you can do the update first, and select the records updated, via:
CREATE PROCEDURE pop_invoice_queue(IN compId int(11), IN limitRet int(11))
BEGIN
SET @uuid = UUID_SHORT();
UPDATE InvoiceQueue
SET uuid = @uuid,
lastPopDate = NOW()
WHERE companyid = compId
AND uuid IS NULL
AND (lastPopDate IS NULL OR lastPopDate < NOW() - INTERVAL 3 MINUTE)
ORDER BY
id
LIMIT limitRet;
SELECT *
FROM InvoiceQueue
WHERE uuid = @uuid
FOR UPDATE;
END;;
For the UUID_SHORT() function to return unique values, it should be called no more than 16 million times a second per machine. See the MySQL documentation on UUID_SHORT() for more details.
For performance, you may want to alter the lastPopDate field to be NOT NULL, as the OR clause will prevent your query from using an index, even if one is available:
ALTER TABLE InvoiceQueue
MODIFY lastPopDate DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00';
Then, if you do not already have one, you could add an index on the companyid/lastPopDate/uuid fields, as follows:
ALTER TABLE InvoiceQueue
ADD INDEX ix_company_lastpop (companyid, lastPopDate, uuid);
Then you can remove the OR clause from your UPDATE query:
UPDATE InvoiceQueue
SET uuid = #uuid,
lastPopDate = NOW()
WHERE companyid = compId
AND lastPopDate < NOW() - INTERVAL 3 MINUTE
ORDER BY
id
LIMIT limitRet;
which will use the index you just created.
Since MySQL has neither collections nor an OUTPUT/RETURNING clause, my suggestion is to use temporary tables. Something like:
CREATE TEMPORARY TABLE temp_data
SELECT LAST_INSERT_ID(id) as value, InvoiceQueue.* FROM InvoiceQueue
WHERE companyid = compId
AND (lastPopDate is null OR lastPopDate < DATE_SUB(NOW(), INTERVAL 3 MINUTE)) LIMIT limitRet FOR UPDATE;
UPDATE InvoiceQueue
INNER JOIN temp_data ON (InvoiceQueue.PKColumn = temp_data.PKColumn)
SET lastPopDate=NOW();
SELECT * FROM temp_data ;
DROP TEMPORARY TABLE temp_data;
Also, I surmise such a SELECT ... FOR UPDATE can cause deadlocks (certainly if the procedure is called from different sessions); as far as I know, the order in which rows get locked is not guaranteed (even if you had ORDER BY, rows might be locked in a different order). I'd recommend double-checking the documentation.