Consider this table:
CREATE TABLE `Alarms` (
`AlarmId` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`DeviceId` BINARY(16) NOT NULL,
`Code` BIGINT(20) UNSIGNED NOT NULL,
`Ended` TINYINT(1) NOT NULL DEFAULT '0',
`NaturalEnd` TINYINT(1) NOT NULL DEFAULT '0',
`Pinned` TINYINT(1) NOT NULL DEFAULT '0',
`Acknowledged` TINYINT(1) NOT NULL DEFAULT '0',
`StartedAt` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
`EndedAt` TIMESTAMP NULL DEFAULT NULL,
`MarkedForDeletion` TINYINT(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`AlarmId`),
KEY `Key1` (`Ended`,`Acknowledged`),
KEY `Key2` (`Pinned`),
KEY `Key3` (`DeviceId`,`Pinned`),
KEY `Key4` (`DeviceId`,`StartedAt`,`EndedAt`),
KEY `Key5` (`DeviceId`,`Ended`,`EndedAt`),
KEY `Key6` (`MarkedForDeletion`)
) ENGINE=INNODB;
And, for this test, populate it like so:
-- Populate some dummy data; 500 alarms for each
-- of 1000 one-second periods
SET @testDevice = UNHEX('00030000000000000000000000000000');
DROP PROCEDURE IF EXISTS `injectAlarms`;
DELIMITER ;;
CREATE PROCEDURE injectAlarms()
BEGIN
SET @fromdate = '2018-02-18 00:00:00';
SET @numdates = 1000;
SET @todate = DATE_ADD(@fromdate, INTERVAL @numdates SECOND);
-- Create table of alarm codes to join on
DROP TABLE IF EXISTS `__codes`;
CREATE TEMPORARY TABLE `__codes` (
`Code` BIGINT NOT NULL PRIMARY KEY
);
SET @startcode = 0;
SET @endcode = 499;
REPEAT
INSERT INTO `__codes` VALUES(@startcode);
SET @startcode = @startcode + 1;
UNTIL @startcode > @endcode END REPEAT;
-- Add an alarm for each code, for each second in range
REPEAT
INSERT INTO `Alarms`
(`DeviceId`, `Code`, `Ended`, `NaturalEnd`, `Pinned`, `Acknowledged`, `StartedAt`, `EndedAt`)
SELECT
@testDevice,
`Code`,
TRUE, FALSE, FALSE, FALSE,
@fromdate, @fromdate
FROM `__codes`;
SET @fromdate = DATE_ADD(@fromdate, INTERVAL 1 SECOND);
UNTIL @fromdate > @todate END REPEAT;
END;;
DELIMITER ;
CALL injectAlarms();
Now, for some datasets the following query works quite well:
SELECT * FROM `Alarms`
WHERE
((`Alarms`.`Ended` = FALSE AND `Alarms`.`Acknowledged` = FALSE) OR `Alarms`.`Pinned` = TRUE) AND
`MarkedForDeletion` = FALSE AND
`DeviceId` = @testDevice
;
This is because MariaDB is clever enough to use index merges, e.g.:
id: 1   select_type: SIMPLE   table: Alarms   type: index_merge
possible_keys: Key1,Key2,Key3,Key4,Key5,Key6
key: Key1,Key2,Key3   key_len: 2,1,17   ref: NULL   rows: 2
Extra: Using union(Key1,intersect(Key2,Key3)); Using where
However, if I use the dataset as populated by the procedure above and flip the query around a bit (this is another view I need, but in this case it returns many more rows):
SELECT * FROM `Alarms`
WHERE
((`Alarms`.`Ended` = TRUE OR `Alarms`.`Acknowledged` = TRUE) AND `Alarms`.`Pinned` = FALSE) AND
`MarkedForDeletion` = FALSE AND
`DeviceId` = @testDevice
;
… it doesn't:
id: 1   select_type: SIMPLE   table: Alarms   type: ref
possible_keys: Key1,Key2,Key3,Key4,Key5,Key6
key: Key2   key_len: 1   ref: const   rows: 144706
Extra: Using where
I would rather like the index merges to happen more often. As it is, given the ref=const, this query plan doesn't look too scary … however, the query takes almost a second to run. That in itself isn't the end of the world, but the poorly-scaling nature of my design shows when trying a more exotic query, which takes a very long time:
-- Create a temporary table that we'll join against in a mo
DROP TABLE IF EXISTS `_ranges`;
CREATE TEMPORARY TABLE `_ranges` (
`Start` TIMESTAMP NOT NULL DEFAULT 0,
`End` TIMESTAMP NOT NULL DEFAULT 0,
PRIMARY KEY(`Start`, `End`)
);
-- Populate it (in reality this is performed by my application layer)
SET @endtime = 1518992216;
SET @starttime = @endtime - 86400;
SET @inter = 900;
DROP PROCEDURE IF EXISTS `populateRanges`;
DELIMITER ;;
CREATE PROCEDURE populateRanges()
BEGIN
REPEAT
INSERT IGNORE INTO `_ranges` VALUES(FROM_UNIXTIME(@starttime), FROM_UNIXTIME(@starttime + @inter));
SET @starttime = @starttime + @inter;
UNTIL @starttime > @endtime END REPEAT;
END;;
DELIMITER ;
CALL populateRanges();
-- Actual query
SELECT UNIX_TIMESTAMP(`_ranges`.`Start`) AS `Start_TS`,
COUNT(`Alarms`.`AlarmId`) AS `n`
FROM `_ranges`
LEFT JOIN `Alarms`
ON `Alarms`.`StartedAt` < `_ranges`.`End`
AND (`Alarms`.`EndedAt` IS NULL OR `Alarms`.`EndedAt` >= `_ranges`.`Start`)
AND ((`Alarms`.`EndedAt` IS NULL AND `Alarms`.`Acknowledged` = FALSE) OR `Alarms`.`Pinned` = TRUE)
-- Again, the above condition is sometimes replaced by:
-- AND ((`Alarms`.`EndedAt` IS NOT NULL OR `Alarms`.`Acknowledged` = TRUE) AND `Alarms`.`Pinned` = FALSE)
AND `DeviceId` = @testDevice
AND `MarkedForDeletion` = FALSE
GROUP BY `_ranges`.`Start`
(This query is supposed to gather a list of counts per time slice, each count indicating how many alarms' [StartedAt,EndedAt] range intersects that time slice. The result populates a line graph.)
Again, when I designed these tables and there weren't many rows in them, index merges seemed to make everything whiz along. But now not so: with the dataset as given in injectAlarms(), this takes 40 seconds to complete!
I noticed this when adding the MarkedForDeletion column and performing some of my first large-dataset scale tests, which is why my choice of indexes doesn't make a big deal out of MarkedForDeletion. The results described above are the same if I remove AND MarkedForDeletion = FALSE from my queries; however, I've kept the condition in, as ultimately I will need it to be there.
I've tried a few USE INDEX/FORCE INDEX combinations, but it never seems to use index merge as a result.
What indexes can I define to make this table behave quickly in the given cases? Or how can I restructure my queries to achieve the same goal?
(Above query plans obtained on MariaDB 5.5.56/CentOS 7, but solution must also work on MySQL 5.1.73/CentOS 6.)
Wow! That's the most complicated "index merge" I have seen.
Usually (perhaps always), you can make a 'composite' index to replace an index-merge-intersect, and perform better. Change key2 from just (pinned) to (pinned, DeviceId). This may get rid of the 'intersect' and speed it up.
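For instance, a minimal sketch of that change (assuming the index can simply be dropped and re-created under the same name):

ALTER TABLE `Alarms`
  DROP INDEX `Key2`,
  ADD INDEX `Key2` (`Pinned`, `DeviceId`);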
In general, the Optimizer uses index merge only in desperation. (I think this is the answer to the title question.) Any slight changes to the query or the values involved, and the Optimizer will perform the query without index merge.
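A common workaround (not specific to this table) when the Optimizer abandons an index merge on an OR is to split the OR into a UNION of two index-friendly selects; sketched against the first query above:

SELECT * FROM `Alarms`
WHERE `Ended` = FALSE AND `Acknowledged` = FALSE
  AND `MarkedForDeletion` = FALSE AND `DeviceId` = @testDevice
UNION
SELECT * FROM `Alarms`
WHERE `Pinned` = TRUE
  AND `MarkedForDeletion` = FALSE AND `DeviceId` = @testDevice;

UNION (rather than UNION ALL) removes the rows that satisfy both branches.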
An improvement on the temp table __codes is to build a permanent table with a large range of values, then use a range of values from that table inside your Proc. If you are using MariaDB, then use the dynamically built "sequence" table. For example the 'table' seq_1_to_100 is effectively a table of one column with numbers 1..100. No need to declare it or populate it.
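For example, the REPEAT loop that fills __codes could become the following (note: the seq_* tables exist in MariaDB 10.1+, so this is an assumption relative to the 5.5/5.1 versions quoted in the question):

INSERT INTO `__codes`
SELECT seq - 1 FROM seq_1_to_500;  -- codes 0..499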
You can get rid of the other REPEAT loop by computing the time from Code.
Avoiding LOOPs will be the biggest performance benefit.
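Putting the two previous tips together, a loop-free sketch of the whole population (again assuming MariaDB's sequence tables; the constants are the ones from injectAlarms()):

INSERT INTO `Alarms`
  (`DeviceId`, `Code`, `Ended`, `NaturalEnd`, `Pinned`, `Acknowledged`, `StartedAt`, `EndedAt`)
SELECT
  UNHEX('00030000000000000000000000000000'),
  c.seq - 1,                                        -- Code 0..499
  TRUE, FALSE, FALSE, FALSE,
  '2018-02-18 00:00:00' + INTERVAL s.seq SECOND,    -- time computed, not looped
  '2018-02-18 00:00:00' + INTERVAL s.seq SECOND
FROM seq_1_to_500 AS c
CROSS JOIN seq_0_to_1000 AS s;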
Get all that done, then I may have other tips.
Firstly, I made a MySQL procedure to increase a number (+1 every time; there is no TRANSACTION in it). I called the procedure n (n > 1) times inside a Spring transaction and got the same number every time; the number only increased by 1 in the end (+n expected).
Secondly, I added a TRANSACTION inside the procedure and committed after the +1, and got the same result as above.
Thirdly, I added @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW) on method A (which calls the procedure), and called method A several times from method B, which is annotated with @Transactional; I got the same result as above.
Can anyone help? Can you give me a way to handle this?
Plus:
the table in MySQL
CREATE TABLE `SEQUENCE` (
`ID` bigint(10) NOT NULL ,
`COUNT` int(11) NOT NULL ,
`CUR_DATE` date NOT NULL ,
`READ_ME` varchar(20) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL ,
PRIMARY KEY (`ID`)
);
the Procedure in MySQL
CREATE DEFINER=`root`@`%` PROCEDURE `SEQUENCE_PROCEDURE`(IN _id bigint)
BEGIN
UPDATE `SEQUENCE` SET `COUNT`=-1,CUR_DATE=now() where `ID`=_id and TIMESTAMPDIFF(DAY,CUR_DATE,now())>0 and _id=1;
UPDATE `SEQUENCE` SET `COUNT`=-1,CUR_DATE=now() where `ID`=_id and TIMESTAMPDIFF(MONTH,CUR_DATE,now())>0 and _id=2;
UPDATE `SEQUENCE` SET `COUNT`=`COUNT`+1 where `ID`=_id;
SELECT * FROM `SEQUENCE` where `ID`=_id;
END
the SQL in mybatis
<select id="getSequence" parameterType="java.lang.Long" resultMap="baseResult" statementType="CALLABLE">
{call SEQUENCE_PROCEDURE(#{id,jdbcType=BIGINT,mode=IN})}
</select>
the Test in project
@Test
@Transactional
public void testSequence() {
System.out.println(sequenceService.getId(2L));
System.out.println(sequenceService.getId(2L));
System.out.println(sequenceService.getId(2L));
}
where
public String getId(Long id) {
Sequence sequence = sequenceMapper.getSequence(id);
String temp = "000000000000" + sequence.getCount();
return temp.substring(temp.length() - 12);
}
the result of test
000000000000 000000000000 000000000000
expected result
000000000000 000000000001 000000000002
Adding START TRANSACTION and COMMIT in the procedure does not work!
In your procedure, lines 1 and 2 run queries specific to ids 1 and 2 (why?), with a redundant double comparison against the _id variable. Also, the third query runs against another table, COUNT; maybe that is an error.
I changed the procedure, assuming the record already exists, and changed the third query to use the SEQUENCE table:
CREATE PROCEDURE SEQUENCE_PROCEDURE(IN _id bigint)
BEGIN
UPDATE `SEQUENCE` SET `COUNT`=-1,CUR_DATE=now()
WHERE `ID`=_id and TIMESTAMPDIFF(DAY,CUR_DATE,now())>0;
UPDATE `SEQUENCE` SET `COUNT`=`COUNT`+1 where `ID`=_id;
SELECT * FROM `SEQUENCE` where `ID`=_id;
END
I have a few tables storing daily orders, customers and salespersons. The schema was not well designed: columns have inappropriate values and types, indexes are missing, there is no partitioning, etc. I designed a new schema and am populating the new tables from the wrecked ones. I am now stuck on populating the daily orders table (around 10M records).
Below are the table definition and the SQL script used to populate it.
TABLE DEFINITION
CREATE TABLE IF NOT EXISTS `testing`.`Orders` (
`order_ID` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`ord_id` BIGINT UNSIGNED NOT NULL,
`create_time` DATETIME NOT NULL,
`create_date` DATE NOT NULL,
`cust_id` MEDIUMINT UNSIGNED NOT NULL,
`cust_mob` BIGINT UNSIGNED NULL,
`sales_id` MEDIUMINT UNSIGNED NULL,
`sales_mob` BIGINT UNSIGNED NULL,
`sales_flag` TINYINT UNSIGNED NULL,
`comm_flag` TINYINT UNSIGNED NULL,
`extraprice` TINYINT UNSIGNED NULL,
PRIMARY KEY (`order_ID`),
INDEX `Date_cust_id` (`create_date` ASC, `cust_id` ASC),
INDEX `Date_cust_mob` (`create_date` ASC, `cust_mob` ASC),
INDEX `Date_dri_id` (`create_date` ASC, `sales_id` ASC),
INDEX `Date_dri_mob` (`create_date` ASC, `sales_mob` ASC),
INDEX `Date_cust` (`create_date` ASC, `cust_id` ASC, `cust_mob` ASC),
INDEX `Date_dri` (`create_date` ASC, `sales_id` ASC, `sales_mob` ASC),
INDEX `cust` (`cust_id` ASC, `cust_mob` ASC),
INDEX `dri` (`sales_id` ASC, `sales_mob` ASC),
UNIQUE INDEX `ord_id_UNIQUE` (`ord_id` ASC)
)
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8;
This script populates the table; it involves two left-joined tables: the Pag table with 6xx K records and the dri table with 3x K records.
SET SQL_SAFE_UPDATES=0;
SET SQL_MODE='';
DROP PROCEDURE IF EXISTS testing.populate_ord1;
DELIMITER $$
CREATE PROCEDURE testing.populate_ord1()
BEGIN
PREPARE stmt
FROM "
INSERT INTO testing.Orders
SELECT
1
,ord_id
,CASE WHEN TRIM(create_time) ='NULL' THEN NULL ELSE STR_TO_DATE(substring(create_time,1,19), '%Y-%m-%d %H:%i:%s') END AS create_time
,CASE WHEN TRIM(create_time) ='NULL' THEN NULL ELSE DATE(STR_TO_DATE(substring(create_time,1,19), '%Y-%m-%d %H:%i:%s')) END AS create_date
,CASE WHEN TRIM(ord.cust_id) = 'NULL' THEN NULL else pag.cust_id END as cust_id
,CASE WHEN TRIM(ord.mob) = 'NULL' THEN NULL else pag.cust_mob END as cust_mob
,CASE WHEN TRIM(ord.sales_id) = 'NULL' THEN NULL else dri.sales_id END as sales_id
,CASE WHEN TRIM(ord.mob1) = 'NULL' THEN NULL else dri.sales_mob END as sales_mob
,CASE WHEN TRIM(sales_flag) ='NULL' THEN NULL ELSE CONVERT(TRIM(sales_flag),UNSIGNED INTEGER) end AS sales_flag
,CASE WHEN TRIM(comm_flag) ='NULL' THEN NULL ELSE CONVERT(TRIM(comm_flag),UNSIGNED INTEGER) end AS comm_flag
,CASE WHEN TRIM(extraprice) ='NULL' THEN NULL ELSE CONVERT(TRIM(extraprice),UNSIGNED INTEGER) end AS extraprice
FROM testing.ord_table ord
LEFT JOIN
(SELECT cust_id,customer_id,cust_mob FROM testing.Passenger) pag
ON TRIM(ord.customer_id) = TRIM(pag.pag_id)
AND TRIM(ord.mob) = TRIM(pag.passenger_mob)
LEFT JOIN
(SELECT sales_id,salesperson_id,sales_mob FROM testing.sales) dri
ON TRIM(ord.salesperson_id) = TRIM(dri.sales_id)
AND TRIM(ord.mob1) = TRIM(dri.sales_mob)
WHERE ord_id != 'NULL' AND create_time IS NOT NULL AND create_time != 'NULL' AND YEAR(create_time) = ? AND MONTH(create_time) = ? AND DAY(create_time) = ?
GROUP BY ord_id
ON DUPLICATE KEY UPDATE ord_id = ord_id
;
";
SET @y = 2014, @m = 9, @d = 1;
WHILE @y <= 2014 DO
WHILE @m <= 12 DO
SET @d = 1;
WHILE @d <= 31 DO
EXECUTE stmt USING @y, @m, @d;
SET @d = @d + 1;
END WHILE;
SET @m = @m + 1;
END WHILE;
SET @y = @y + 1;
SET @m = 1;
END WHILE;
DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
set autocommit=0;
call testing.populate_ord1();
COMMIT;
I have failed to populate any records into the table. Sometimes it raises a lock wait timeout error or a data type error, or it simply takes too long (2 days); I suspect it isn't doing any work at all.
I searched the web a bit and have added the following settings to my.cnf.
innodb_autoinc_lock_mode = 2
innodb_lock_wait_timeout = 150
innodb_flush_log_at_trx_commit =2
innodb_buffer_pool_size = 14G
Would anyone advise on how I could accomplish this task efficiently? The code above runs without any syntax errors. In case there is any naming confusion, please let me know if it is critical to clarify, as I have slightly tweaked those table and column names.
Start by performing
UPDATE ... SET
comm_flag = TRIM(comm_flag),
sales_flag = TRIM(sales_flag),
...
That will speed up the subsequent queries some, and simplify them.
Then avoid using LEFT JOIN ( SELECT ... FROM x WHERE ... ). Instead, see if you can turn that into LEFT JOIN x ON ... WHERE .... That is likely to help.
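Once the TRIM update above has run, the derived tables can collapse into direct joins; a sketch of that shape (names copied from the question's script, which the question says were slightly tweaked):

FROM testing.ord_table AS ord
LEFT JOIN testing.Passenger AS pag
       ON ord.customer_id = pag.pag_id
      AND ord.mob = pag.passenger_mob
LEFT JOIN testing.sales AS dri
       ON ord.salesperson_id = dri.sales_id
      AND ord.mob1 = dri.sales_mob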
It is usually a bad idea to split a DATE and TIME into two columns. Or do you have a good argument for such? Let's see the queries that touch that pair of columns.
There is no need for STR_TO_DATE() if the string is already a properly formatted DATE or DATETIME. That is, a string works just fine.
Once the TRIM is out of the way, CONVERT(TRIM(comm_flag),UNSIGNED INTEGER) can be simply comm_flag.
Don't loop through things a day at a time -- the way you have it structured, it will be doing a full table scan! About 1000 times !! (This is likely to be the biggest performance issue.)
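For example, replacing the per-day YEAR()/MONTH()/DAY() filter with a single sargable range predicate lets one INSERT cover the whole period in a single pass (endpoints inferred from the question's loop):

WHERE ord.create_time >= '2014-09-01'
  AND ord.create_time <  '2015-01-01'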
Why does STRAIGHT_JOIN consume more CPU than a regular join? Do you have any idea?
When I use STRAIGHT_JOIN on one of my queries, it speeds the query up from 12 seconds to 3 seconds, but it consumes much more CPU. Might it be about server configuration, or something else?
You might want to check the code after the comment /* Topic Ids are OK, Getting Devices... */.
Before that line there is some code that fills topic ids into a temp table.
Here is the query:
CREATE PROCEDURE `DevicesByTopic`(IN platform TINYINT, IN application TINYINT, IN topicList TEXT, IN page_no MEDIUMINT UNSIGNED)
BEGIN
DECLARE m_index INT DEFAULT 0;
DECLARE m_topic VARCHAR(255);
DECLARE m_topic_id BIGINT UNSIGNED DEFAULT NULL;
DECLARE m_session_id VARCHAR(40) CHARSET utf8 COLLATE utf8_turkish_ci;
-- Session Id
SET m_session_id = replace(uuid(), '-', '');
-- Temp table
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_topics(
topic_slug VARCHAR(100) COLLATE utf8_turkish_ci
,topic_id BIGINT UNSIGNED
,session_id VARCHAR(40) COLLATE utf8_turkish_ci
,INDEX idx_tmp_topic_session_id (session_id)
,INDEX idx_tmp_topic_id (topic_id)
) CHARSET=utf8 COLLATE=utf8_turkish_ci;
-- Filling topics in a loop
loop_topics: LOOP
SET m_index = m_index + 1;
SET m_topic_id = NULL;
SET m_topic= SPLIT_STR(topicList,',', m_index);
IF m_topic = '' THEN
LEAVE loop_topics;
END IF;
SELECT t.topic_id INTO m_topic_id FROM topic AS t WHERE t.application = application AND (t.slug_hashed = UNHEX(MD5(m_topic)) AND t.slug = m_topic) LIMIT 1;
-- Fill temp table
IF m_topic_id IS NOT NULL AND m_topic_id > 0 THEN
INSERT INTO tmp_topics
(topic_slug, topic_id, session_id)
VALUES
(m_topic, m_topic_id, m_session_id);
END IF;
END LOOP loop_topics;
/* Topic Ids are OK, Getting Devices... */
SELECT
dr.device_id, dr.platform, dr.application, dr.unique_device_id, dr.amazon_arn
FROM
device AS dr
INNER JOIN (
SELECT STRAIGHT_JOIN
DISTINCT
d.device_id
FROM
device AS d
INNER JOIN
device_user AS du ON du.device_id = d.device_id
INNER JOIN
topic_device_user AS tdu ON tdu.device_user_id = du.device_user_id
INNER JOIN
tmp_topics AS tmp_t ON tmp_t.topic_id = tdu.topic_id
WHERE
((platform IS NULL OR d.platform = platform) AND d.application = application)
AND d.page_no = page_no
AND d.status = 1
AND du.status = 1
AND tmp_t.session_id = m_session_id COLLATE utf8_turkish_ci
) dFiltered ON dFiltered.device_id = dr.device_id
WHERE
((platform IS NULL OR dr.platform = platform) AND dr.application = application)
AND dr.page_no = page_no
AND dr.status = 1;
-- Delete rows from temp table
DELETE FROM tmp_topics WHERE session_id = m_session_id;
END;
With STRAIGHT_JOIN this query takes about 3 seconds but consumes a lot of CPU, around 90%; if I remove the STRAIGHT_JOIN keyword, it takes 12 seconds but consumes only about 12% CPU.
MySQL 5.6.19a - innodb
What might be the reason?
Best regards.
A STRAIGHT_JOIN is used when you need to override MySQL's optimizer. You are telling it to ignore its own optimized execution path and instead rely on reading the tables in the order you have written them in the query.
99% of the time you don't want to use a straight_join. Just rely on MySQL to do its job and optimize the execution path for you. After all, any RDBMS worth its salt is going to be pretty decent at optimizing.
The few times you should use a straight_join are when you've already tested MySQL's optimization for a given query and found it lacking. In your case with this query, clearly your manual optimization using straight_join is not better than MySQL's baked in optimization.
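One way to check is to compare the two plans side by side; a sketch using two of the tables from the question:

EXPLAIN SELECT d.device_id
FROM device AS d
INNER JOIN device_user AS du ON du.device_id = d.device_id;

EXPLAIN SELECT STRAIGHT_JOIN d.device_id
FROM device AS d
INNER JOIN device_user AS du ON du.device_id = d.device_id;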
I need to check whether the EndTime column in my table is null before I can insert another record. If the EndTime column is not null, then a new record can be inserted; otherwise an error must be thrown. I'm not sure how to raise the error in SQL.
This is what I tried, but it doesn't work:
ALTER PROCEDURE [dbo].[AddDowntimeEventStartByDepartmentID]
(@DepartmentId int,
@CategoryId int,
@StartTime datetime,
@Comment varchar(100) = NULL)
AS
BEGIN TRY
PRINT N'Starting execution'
SET @StartTime = COALESCE(@StartTime, CURRENT_TIMESTAMP);
INSERT INTO DowntimeEvent(DepartmentId, CategoryId, StartTime, EndTime, Comment)
WHERE EndTime = NULL
OUTPUT
inserted.EventId, inserted.DepartmentId,
inserted.CategoryId, inserted.StartTime,
inserted.EndTime, inserted.Comment
VALUES(@DepartmentId, @CategoryId, @StartTime, NULL, @Comment)
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER(),ERROR_MESSAGE()
END CATCH
Here is my table:
CREATE TABLE [dbo].[DowntimeEvent](
[EventId] [int] IDENTITY(0,1) NOT NULL,
[DepartmentId] [int] NOT NULL,
[CategoryId] [int] NOT NULL,
[StartTime] [datetime] NOT NULL,
[EndTime] [datetime] NULL,
[Comment] [varchar](100) NULL,
)
You could use the INSERT...SELECT syntax instead of INSERT...VALUES to be able to use a WHERE clause (with a different condition to the one you tried to use, see below), then check the number of affected rows and raise an error if it is 0:
...
BEGIN TRY
...
INSERT INTO DowntimeEvent
...
SELECT @DepartmentId, @CategoryId, @StartTime, NULL, @Comment
WHERE NOT EXISTS (
SELECT *
FROM dbo.DowntimeEvent
WHERE DepartmentId = #DepartmentId
AND CategoryId = #CategoryId
AND EndTime IS NULL
);
IF @@ROWCOUNT = 0
RAISERROR ('A NULL row already exists!', 16, 1)
;
END TRY
BEGIN CATCH
...
END CATCH;
(Of course, you will need to omit your WHERE clause, as it is invalid Transact-SQL.)
If you want a prevention mechanism at the database level rather than just in your stored procedure, so as to be able to prevent invalid additions from any caller, you may want to consider a trigger.
A FOR INSERT trigger like this would check if new rows violate the rule "Do not add rows newer than the existing NULL row" (as well as "Do not add older rows with empty EndTime") and roll back the transaction if they do:
CREATE TRIGGER DowntimeEvent_CheckNew
ON dbo.DowntimeEvent
FOR INSERT, UPDATE
AS
-- do nothing if EndTime is not affected
IF NOT UPDATE(EndTime)
RETURN
;
-- raise an error if there is an inserted NULL row
-- older than another existing or inserted row
IF EXISTS (
SELECT *
FROM dbo.DowntimeEvent AS t
WHERE t.EndTime IS NULL
AND EXISTS (
SELECT *
FROM inserted AS i
WHERE i.DepartmentId = t.DepartmentId
AND i.CategoryId = t.CategoryId
AND i.StartTime >= t.StartTime
)
)
BEGIN
RAISERROR ("An attempt to insert an older NULL row!", 16, 1);
ROLLBACK TRANSACTION;
END;
-- raise an error if there is an inserted row newer
-- than the existing NULL row or an inserted NULL row
IF EXISTS (
SELECT *
FROM inserted AS i
WHERE i.EndTime IS NULL
AND EXISTS (
SELECT *
FROM dbo.DowntimeEvent AS t
WHERE t.DepartmentId = i.DepartmentId
AND t.CategoryId = i.CategoryId
AND t.StartTime >= i.StartTime
)
)
BEGIN
RAISERROR ("An older NULL row exists!", 16, 1);
ROLLBACK TRANSACTION;
END;
Note that merely issuing ROLLBACK TRANSACTION in a trigger already implies raising a level 16 error like this:
Msg 3609, Level 16, State 1, Line nnn
The transaction ended in the trigger. The batch has been aborted.
so, you may not need your own. There would be a difference in the meaning of Line nnn between the message above and the one brought by your own RAISERROR, however: the line number in the former would refer to the location of the triggering statement, whereas the line number in the latter would refer to a position in your trigger.
Alright, so I have a table that has a count field that is between 0 and 100. Up to six of these fields are tied to a single id. I need to run an update that will decrease each of these rows by a different random number between 1 and 3.
I know I can get my random value with:
CAST(RAND() * 3 AS UNSIGNED)
And I know I can get my update to work with:
UPDATE Info SET Info.count = CASE WHEN Info.count < 2 THEN 0 ELSE Info.count - 2 END WHERE Info.id = $iid AND Info.type = 'Active';
(This just makes sure I will never dip below 0)
But I cannot combine them, for the obvious reason that my random number will be different when evaluated than when it's set...
UPDATE Info SET Info.count = CASE WHEN Info.count < CAST(RAND() * 3 AS UNSIGNED) THEN 0 ELSE Info.count - CAST(RAND() * 3 AS UNSIGNED) END WHERE Info.id = $iid AND Info.type = 'Active';
Now I don't want to save just 1 variable, because I may need up to 6 different numbers...is there a way to do what I want to do in a single query? I know how I can do it in multiple, but I really shouldn't be running up to 6 queries every time I need to update one of these blocks...
The table structure I'm working off of is:
CREATE TABLE `Info` (
`id` int(40) DEFAULT NULL,
`count` int(11) DEFAULT NULL,
`type` varchar(40) DEFAULT NULL,
`AUTOINC` int(11) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`AUTOINC`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;
The AUTOINC field at the moment is just so I can easily verify what's going on during testing.
Thanks!
You're probably best off writing a procedure to do this. So you could write something to the extent of (pseudo code)
create procedure update_counts()
begin
  declare done int default 0;
  declare v_id, v_count, v_sub int;
  declare c cursor for
    select id, `count`, floor(1 + rand() * 3)
    from Info where type = 'Active';
  -- the handler flips the flag when the cursor is exhausted
  declare continue handler for not found set done = 1;
  open c;
  read_loop: loop
    fetch c into v_id, v_count, v_sub;
    if done then leave read_loop; end if;
    update Info
    set `count` = case when v_count - v_sub < 0 then 0 else v_count - v_sub end
    where id = v_id and type = 'Active';
  end loop;
  close c;
end
//
Or you can change the procedure to accept an ID like I had it before and simply use one select and one update statement.
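A sketch of that single-id variant (the procedure name and parameter are illustrative, not from the question; it draws one random amount per call):

delimiter //
create procedure update_one(in p_id int)
begin
  declare sub int;
  -- one random amount per call, used consistently in the CASE below
  set sub = floor(1 + rand() * 3);
  update Info
  set `count` = case when `count` < sub then 0 else `count` - sub end
  where id = p_id and type = 'Active';
end
//
delimiter ;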