I have this table
CREATE TABLE `pcodes` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`product_id` int(10) unsigned NOT NULL,
`code` varchar(100) NOT NULL,
`used` int(10) unsigned NOT NULL DEFAULT '0',
`created_at` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` datetime DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
)
and an insert command is the following:
INSERT INTO `pcodes` (`product_id`, `code`) VALUES ('1', 'test2');
The table contains random codes for each product_id. I want to get one unused code randomly (LIMIT 1 is ok for the job), mark the code as used and return it to the next layer.
So far I did this:
SELECT * FROM pcodes where product_id=1 and used=0 LIMIT 1
UPDATE pcodes SET used= 1 WHERE (id = 2);
but this does not work well when multiple threads request the first unused code. What is the optimal solution to do this query? I would like to avoid stored procedures.
Possible solution.
This assumes that the used column stores no values other than 0 and 1.
DELIMITER //
CREATE PROCEDURE select_one_random_row (OUT rowid BIGINT)
BEGIN
    REPEAT
        -- claim one unused row by tagging it with this connection's id
        UPDATE pcodes SET used = CONNECTION_ID() WHERE used = 0 LIMIT 1;
        -- find the row this connection just claimed
        SELECT id INTO rowid FROM pcodes WHERE used = CONNECTION_ID();
    UNTIL rowid IS NOT NULL END REPEAT;
    -- finalize: mark the claimed row as used
    UPDATE pcodes SET used = 1 WHERE used = CONNECTION_ID();
END //
DELIMITER ;
To prevent an infinite loop (for example, when there are no rows with used = 0), add a counter that increments inside the REPEAT cycle and break out after a reasonable number of attempts.
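For example, the loop portion of the body could look like this (the DECLARE has to sit at the top of the BEGIN block, and the limit of 10 attempts is arbitrary):

    DECLARE attempts INT DEFAULT 0;
    REPEAT
        UPDATE pcodes SET used = CONNECTION_ID() WHERE used = 0 LIMIT 1;
        SELECT id INTO rowid FROM pcodes WHERE used = CONNECTION_ID();
        SET attempts = attempts + 1;
    UNTIL rowid IS NOT NULL OR attempts >= 10 END REPEAT;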
The code may also be converted to a FUNCTION that returns the selected rowid.
It is possible that the procedure/function fails (for some external reason) and a row stays marked as "selected by CONNECTION_ID()" even though that connection is already gone. So you also need a service procedure, executed by the Event Scheduler, that garbage-collects rows belonging to connections that no longer exist and resets their used value back to zero, returning such rows to the unused pool.
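A minimal sketch of such a cleanup event (the event name and schedule are illustrative; it assumes claimed-but-unfinished rows hold a CONNECTION_ID() value greater than 1 in used):

-- requires the Event Scheduler to be enabled (event_scheduler = ON)
CREATE EVENT reclaim_orphaned_pcodes
ON SCHEDULE EVERY 5 MINUTE
DO
    UPDATE pcodes
    SET used = 0
    WHERE used > 1
      AND used NOT IN (SELECT ID FROM information_schema.PROCESSLIST);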
Related
I am working with mysql and I have a table with the following structure (a summary):
CREATE TABLE `costs` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`utility` DECIMAL(9,2) UNSIGNED NOT NULL,
`tax` DECIMAL(9,2) UNSIGNED NOT NULL,
`active` TINYINT(1) UNSIGNED NOT NULL DEFAULT '1',
`created_at` TIMESTAMP NULL DEFAULT NULL,
`updated_at` TIMESTAMP NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
where the active field defaults to 1 when inserting. Then, when saving a new record, I would like all other rows to have their active field updated to 0, so I tried to create a trigger for this, but I am getting a MySQL error.
DELIMITER //
CREATE TRIGGER after_costs_insert AFTER INSERT ON costs FOR EACH ROW
BEGIN
UPDATE costs SET active = 0 WHERE id <> NEW.id;
END;
//
DELIMITER ;
I think it is not possible to do this, so how can I update these rows?
In MySQL, a trigger cannot modify the table it was fired on. That is a typical limitation in SQL, mainly meant to prevent an infinite loop of invocations (a query fires a trigger, which executes a query, which fires the trigger again, and so on).
Here, instead of actually storing this derived information, I would actually recommend using a view that computes the column on the fly when queried.
If you are running MySQL 8.0:
create view costs_view as
select
id,
utility,
tax,
row_number() over(order by id desc) = 1 active,
created_at,
updated_at
from costs
In earlier versions:
create view costs_view as
select
id,
utility,
tax,
id = (select max(id) from costs) active,
created_at,
updated_at
from costs
This gives you an always up-to-date column, that you just don't need to maintain.
If you want only the most recent row, then you can use:
select c.*
from costs c
order by id desc -- or created_at
limit 1;
This will work in a view.
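For example (the view name is illustrative):

create view latest_cost as
select c.*
from costs c
order by id desc -- or created_at
limit 1;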
More often, the situation is that you have one active row per something -- such as a "utility" or whatever. In that case, you can use a secondary table to store one row per "something" along with the id of the active row. The trigger can set this id.
In your case, you have only one active row in the costs table, so a secondary table might be considered overkill. You can easily get the current active value using the above query.
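For completeness, here is a rough sketch of that secondary-table idea for the single-active-row case (the table, trigger, and column names are illustrative, not from the original schema):

-- one-row helper table pointing at the currently active cost
CREATE TABLE active_cost (
    singleton TINYINT UNSIGNED NOT NULL DEFAULT 1 PRIMARY KEY,
    cost_id INT UNSIGNED NOT NULL
);

DELIMITER //
CREATE TRIGGER costs_track_active
AFTER INSERT ON costs FOR EACH ROW
BEGIN
    -- allowed: the trigger writes to active_cost, not to costs itself
    INSERT INTO active_cost (singleton, cost_id)
    VALUES (1, NEW.id)
    ON DUPLICATE KEY UPDATE cost_id = NEW.id;
END //
DELIMITER ;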
I have a large table named 'roomlogs' which has nearly 1 million entries.
The structure of the table:
id --> PK
roomId --> varchar FK to rooms table
userId --> varchar FK to users table
enterTime --> Date and Time
exitTime --> Date and Time
status --> bool
I have the previous indexing on roomID, I recently added an index on the userId column.
So, when I run a stored procedure with the following code, it takes about 50 seconds on average, which it should not.
DELIMITER ;;
CREATE DEFINER=`root`@`%` PROCEDURE `enter_room`(IN pRoomId varchar(200), IN puserId varchar(50), IN ptime datetime, IN phidden int, pcheckid int, pexit datetime)
begin
update roomlogs set
roomlogs.exitTime = ptime,
roomlogs.`status` = 1
where
roomlogs.userId = puserId
and roomlogs.`status` = 0
and DATEDIFF(ptime,roomlogs.enterTime) = 0;
INSERT into roomlogs
( roomlogs.roomId,
roomlogs.userId,
roomlogs.enterTime,
roomlogs.exitTime,
roomlogs.hidden,
roomlogs.checkinId )
value
( pRoomId,
puserId,
ptime,
pexit,
phidden,
pcheckid);
select *
from
roomlogs
where
roomlogs.id= LAST_INSERT_ID();
end ;;
DELIMITER ;
What can be the reason for it taking this much time?
I added an index recently so previous rows are not indexed.
There is no selection on storage type for any indexes right now. Should I change it to B-tree?
On my website I get 20-30 simultaneous calls on other procedures, while this procedure gets 10-20 simultaneous calls. Does the update query in the procedure take a lock? In the mysql.slow_log table the lock_time shows 0 for each query.
Is there any other reason for this behaviour?
Edit: Here is the SHOW CREATE TABLE output:
CREATE TABLE `roomlogs` (
`roomId` varchar(200) CHARACTER SET latin1 DEFAULT NULL,
`userID` varchar(50) CHARACTER SET latin1 DEFAULT NULL,
`enterTime` datetime DEFAULT NULL,
`exitTime` datetime DEFAULT NULL,
`id` int(11) NOT NULL AUTO_INCREMENT,
`status` int(11) DEFAULT '0',
`hidden` int(11) DEFAULT '0',
`checkinId` int(11) DEFAULT '-1',
PRIMARY KEY (`id`),
KEY `RoomLogIndex` (`roomId`),
KEY `RoomLogIDIndex` (`id`),
KEY `USERID` (`userID`)
) ENGINE=InnoDB AUTO_INCREMENT=1064216 DEFAULT CHARSET=utf8
I can also see that this query runs very often, around 100,000 times per day (nearly continuously):
SELECT count(*) from roomlogs where roomId=proomId and status='0';
Because this query reads from the same table, does InnoDB block it or take a lock during the update query? I can see that when the above stored procedure runs more often, this query takes more time.
Here is the link for MySQL variables: https://docs.google.com/document/d/17_MVaU4yvpQfVDT83yhSjkLHsgYd-z2mg6X7GwvYZGE/edit?usp=sharing
roomlogs needs this 'composite' index:
INDEX(userId, `status`, enterTime)
"I added an index recently so previous rows are not indexed."
Not true. Adding an INDEX indexes the entire table.
The default index type is BTree; no need to explicitly specify it.
"does the update query in the procedure make a lock?"
It does some form of locking. What is the value of autocommit? Do you explicitly use BEGIN and COMMIT? Is the table ENGINE=InnoDB? Please provide SHOW CREATE TABLE.
"In the mysql.slow_log table the lock_time shows 0 for each query."
The INSERT you show seems to be inserting the same row as the UPDATE. Maybe you need INSERT ... ON DUPLICATE KEY UPDATE ...?
Don't "hide an indexed column inside a function"; instead of DATEDIFF(roomlogs.enterTime, NOW()) = 0, do
AND enterTime >= CURDATE()
AND enterTime < CURDATE() + INTERVAL 1 DAY
This allows the index to be used more fully.
KEY `RoomLogIndex` (`roomId`) -- change it to (roomId, `status`)
KEY `RoomLogIDIndex` (`id`) -- remove it; it is redundant with the PRIMARY KEY
The buffer pool is only 97,517,568 bytes -- make it more like 9G.
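Put together, the changes could look roughly like this (index names are illustrative, and the rewritten UPDATE assumes ptime is effectively NOW(), as the original DATEDIFF test implies):

-- add the composite indexes, drop the redundant one
ALTER TABLE roomlogs
    ADD INDEX user_status_enter (userID, `status`, enterTime),
    DROP INDEX RoomLogIndex,
    ADD INDEX room_status (roomId, `status`),
    DROP INDEX RoomLogIDIndex;

-- the UPDATE inside the procedure, rewritten with a sargable date range
UPDATE roomlogs
SET exitTime = ptime,
    `status` = 1
WHERE userID = puserId
  AND `status` = 0
  AND enterTime >= CURDATE()
  AND enterTime <  CURDATE() + INTERVAL 1 DAY;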
We are using a table which has schema like following:-
CREATE TABLE `user_subscription` (
`ID` varchar(40) NOT NULL,
`COL1` varchar(40) NOT NULL,
`COL2` varchar(30) NOT NULL,
`COL3` datetime NOT NULL,
`COL4` datetime NOT NULL,
`ARCHIVE` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`ID`)
)
Now we want to partition on the ARCHIVE column. ARCHIVE can have only two values, 0 or 1, so there would be two partitions.
Actually, in our case we are using partitioning as an archival process. To partition, we need to make the ARCHIVE column part of the primary key. The problem here is that two rows can then have the same ID with different ARCHIVE values. That by itself is not the main problem for us, since the two rows will be in different partitions. The problem is that when we update the ARCHIVE value of one of them to move that row to the archive partition, the update is rejected with a "Duplicate entry" error.
Can somebody help in this regard?
Unfortunately,
A UNIQUE INDEX (or a PRIMARY KEY) must include all columns in the table's partitioning function
and since MySQL does not support check constraints either, the only ugly workaround I can think of is enforcing the uniqueness manually through triggers:
CREATE TABLE t (
id INT NOT NULL,
archived TINYINT(1) NOT NULL DEFAULT 0,
PRIMARY KEY (id, archived) -- required by MySQL limitation on partitioning
)
PARTITION BY LIST(archived) (
PARTITION pActive VALUES IN (0),
PARTITION pArchived VALUES IN (1)
);
DELIMITER //
CREATE TRIGGER tInsert
BEFORE INSERT ON t FOR EACH ROW
CALL checkUnique(NEW.id) //

CREATE TRIGGER tUpdate
BEFORE UPDATE ON t FOR EACH ROW
BEGIN
    -- only check when the id changes; otherwise the updated row (e.g. archived 0 -> 1) would collide with itself
    IF NEW.id <> OLD.id THEN
        CALL checkUnique(NEW.id);
    END IF;
END //
CREATE PROCEDURE checkUnique(pId INT)
BEGIN
DECLARE flag INT;
DECLARE message VARCHAR(50);
SELECT id INTO flag FROM t WHERE id = pId;
IF flag IS NOT NULL THEN
-- the below tries to mimic the error raised
-- by a regular UNIQUE constraint violation
SET message = CONCAT("Duplicate entry '", pId, "'");
SIGNAL SQLSTATE "23000" SET
MYSQL_ERRNO = 1062,
MESSAGE_TEXT = message,
COLUMN_NAME = "id";
END IF;
END //
DELIMITER ;
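A quick illustration of the intended behaviour (values are made up):

INSERT INTO t (id) VALUES (1);           -- ok, goes to partition pActive
UPDATE t SET archived = 1 WHERE id = 1;  -- ok, row moves to pArchived
INSERT INTO t (id) VALUES (1);           -- fails: Duplicate entry '1'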
MySQL's limitations on partitioning being such a downer (in particular its lack of support for foreign keys), I would advise against using it altogether until the table grows so large that it becomes an actual concern.
I am trying to figure out how to make a trigger that assigns the value of the auto-incremented ID primary key field (generated on insert) to another field, Sort_Placement, so they are the same after the insert.
If you are wondering why I am doing this: Sort_Placement is used as a sort value that can be changed later, but by default the record should be added to the bottom of the table.
Table Data
`ID` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`Account_Num` mediumint(8) unsigned NOT NULL,
`Product_Num` mediumint(8) unsigned NOT NULL,
`Sort_Placement` mediumint(8) unsigned DEFAULT NULL,
`Order_Qty_C` smallint(6) NOT NULL DEFAULT '0',
`Order_Qty_B` smallint(6) NOT NULL DEFAULT '0',
`Discount` decimal(6,2) NOT NULL DEFAULT '0.00',
PRIMARY KEY (`ID`),
UNIQUE KEY `ID_UNIQUE` (`ID`)
After Insert Trigger
CREATE
TRIGGER `order_guide_insert_trigger`
AFTER INSERT ON `order_guide`
FOR EACH ROW
BEGIN
IF Sort_Placement IS NULL THEN
SET Sort_Placement = NEW.ID;
END IF;
END;
I have tried a bunch of combinations of using the "NEW" prefix with no luck. For example putting the NEW prefix before each field name.
Trying it out
INSERT INTO `order_guide` (`Account_Num`, `Product_Num`) VALUES ('5966', '3');
Insert Error
ERROR 1054: Unknown column 'Sort_Placement' in 'field list'
This seems like a bit of a hack job but I was able to get it working using the LAST_INSERT_ID() function built into MySQL.
CREATE TRIGGER `order_guide_insert_trigger`
BEFORE INSERT ON `order_guide`
FOR EACH ROW
BEGIN
IF NEW.Sort_Placement IS NULL THEN
SET NEW.Sort_Placement = LAST_INSERT_ID() + 1;
END IF;
END;
This also seems to work:
CREATE TRIGGER `order_guide_insert_trigger`
BEFORE INSERT ON `order_guide`
FOR EACH ROW
BEGIN
IF NEW.Sort_Placement IS NULL THEN
SET NEW.Sort_Placement = (SELECT ID FROM order_guide ORDER BY ID DESC LIMIT 1) + 1;
END IF;
END;
I ran into a similar (yet different) requirement, where a field value in the table needed to be based on the new record's Auto Increment ID. I found two solutions that worked for me.
The first option was to use an event timer that runs every 60 seconds. The event updates the records where my field was still set to the default of NULL. Not a bad solution if you don't mind the delay of up to 60 seconds (you could run it every second if the field being updated is indexed). Basically the event does this:
DELIMITER //
CREATE EVENT `evt_fixerupper`
ON SCHEDULE EVERY 1 MINUTE
ENABLE
COMMENT ''
DO
BEGIN
    UPDATE table_a SET table_a.other_field = CONCAT(table_a.id, '-kittens')
    WHERE ISNULL(table_a.other_field);
END //
DELIMITER ;
The other option was to generate my own unique primary IDs (rather than relying on AUTO_INCREMENT). In this case I used a function (in my application) modeled after the Perl module https://metacpan.org/pod/Data::Uniqid. The generated IDs are quite long, but they work well, and I know the value before the insert, so I can use it to generate values for additional fields.
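In SQL-only terms, the same idea could look roughly like this, with UUID() standing in for the Perl module and the illustrative table_a/other_field names from the event example above (this variant assumes id is a character column rather than AUTO_INCREMENT):

-- generate the id up front so dependent values can be derived from it
SET @new_id = REPLACE(UUID(), '-', '');
INSERT INTO table_a (id, other_field)
VALUES (@new_id, CONCAT(@new_id, '-kittens'));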
I have a table called promotion_codes
CREATE TABLE promotion_codes (
id int(10) UNSIGNED NOT NULL auto_increment,
created_at datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
code varchar(255) NOT NULL,
order_id int(10) UNSIGNED NULL DEFAULT NULL,
allocated_at datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This table is pre-populated with available codes that will be assigned to orders that meet a specific criteria.
What I need to ensure is that after the ORDER is created, that I obtain an available promotion code and update its record to reflect that it has been allocated.
I am not 100% sure how to not grab the same record twice if simultaneous requests come in.
I have tried locking the row during a select and locking the row during an update - both still seem to allow a second (simultaneous) attempt to grab the same record, which is what I want to avoid.
UPDATE promotion_code
SET allocated_at = "' . $db_now . '", order_id = ' . $donation->id . '
WHERE order_id IS NULL LIMIT 1
You can add a second table which holds all used codes. That way you can use a unique constraint in the assignment table to make sure that a code is not assigned twice.
CREATE TABLE `used_codes` (
    `usage` INTEGER AUTO_INCREMENT PRIMARY KEY,
    `id` INTEGER NOT NULL UNIQUE, -- makes sure that a code cannot be assigned twice
    `allocated_at` DATETIME NOT NULL
) ENGINE=InnoDB;
You add the ID of a used code into the used_codes table and then query which code you used. When these two operations are in one transaction, the whole transaction will fail if there is a second attempt to use the same code.
I did not test the following code; you might need to adjust it.
Also, you need to make sure that your server meets the requirements for transactions (the tables must use a transactional engine such as InnoDB).
-- These changes have to be atomic, so don't use autocommit
SET autocommit = 0;
START TRANSACTION;

-- claim one code that is not yet present in used_codes
INSERT INTO `used_codes` (`id`, `allocated_at`)
SELECT p.`id`, NOW()
FROM `promotion_codes` p
LEFT JOIN `used_codes` u ON u.`id` = p.`id`
WHERE u.`id` IS NULL
LIMIT 1;

-- LAST_INSERT_ID() returns the `usage` value generated by the INSERT above
-- for this connection, so it is safe even with parallel transactions
SELECT `code` FROM `promotion_codes` WHERE `id` =
    (SELECT `id` FROM `used_codes` WHERE `usage` = LAST_INSERT_ID());

COMMIT;
You can use the returned code if the transaction succeeded. If more than one process tries to use the same code, only one of them succeeds, while the rest fail with insert errors about the duplicate row. In your software you need to distinguish between the duplicate-row error and other errors, and re-execute the statement on duplication errors.