I have this table:
CREATE TABLE `pcodes` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`product_id` int(10) unsigned NOT NULL,
`code` varchar(100) NOT NULL,
`used` int(10) unsigned NOT NULL DEFAULT '0',
`created_at` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` datetime DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
)
and an insert command looks like this:
INSERT INTO `pcodes` (`product_id`, `code`) VALUES ('1', 'test2');
The table contains random codes for each product_id. I want to get one unused code randomly (LIMIT 1 is ok for the job), mark the code as used and return it to the next layer.
So far I did this:
SELECT * FROM pcodes where product_id=1 and used=0 LIMIT 1
UPDATE pcodes SET used= 1 WHERE (id = 2);
but this does not work well when multiple threads request the first unused code. What is the optimal way to do this? I would like to avoid stored procedures.
A possible solution.
It assumes that no values other than 0 and 1 are stored in the used column.
CREATE PROCEDURE select_one_random_row (OUT rowid BIGINT)
BEGIN
REPEAT
UPDATE pcodes SET used = CONNECTION_ID() WHERE used = 0 LIMIT 1;
SELECT id INTO rowid FROM pcodes WHERE used = CONNECTION_ID();
UNTIL rowid IS NOT NULL END REPEAT;
UPDATE pcodes SET used = 1 WHERE used = CONNECTION_ID();
END
To prevent an infinite loop (for example, when there are no rows with used = 0), add a counter that increments inside the REPEAT cycle and breaks out after a reasonable number of attempts.
The code may be converted to a FUNCTION that returns the selected rowid.
It is possible that the procedure/function fails (for some external reason) and a row stays marked as "selected by CONNECTION_ID()" while that connection is already gone. So you also need a maintenance procedure, executed by the Event Scheduler, that garbage-collects rows belonging to connections that no longer exist and resets their used value back to zero, returning such rows to the unused pool.
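If you want to avoid stored procedures entirely, a common alternative pattern (a sketch, assuming the pcodes table above; it relies on MySQL's documented LAST_INSERT_ID(expr) behavior) is to claim a row atomically with a single UPDATE and read its id back afterwards:

```sql
-- Atomically claim one unused code. Passing an expression to
-- LAST_INSERT_ID() stores that value per-connection, so it can be
-- read back afterwards without a race.
UPDATE pcodes
   SET used = 1,
       id   = LAST_INSERT_ID(id)
 WHERE product_id = 1
   AND used = 0
 LIMIT 1;

-- If the UPDATE reported 1 affected row, fetch the claimed code.
SELECT code FROM pcodes WHERE id = LAST_INSERT_ID();
```

Because the UPDATE acquires a row lock before changing used, two concurrent connections cannot claim the same row, and each connection sees only its own LAST_INSERT_ID() value.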
I am trying to create an event in MySQL.
Schema:
create event alert_2 ON SCHEDULE EVERY 300 SECOND DO
BEGIN
DECLARE current_time DATETIME;
DECLARE attempted INT;
DECLARE completed INT;
DECLARE calc_value DECIMAL;
set @current_time = CONVERT_TZ(NOW(), @@session.time_zone, '+0:00');
select count(uniqueid) as @attempted,SUM(CASE WHEN seconds > 0 THEN 1 ELSE 0 END) as @completed from callinfo where date >= DATE_SUB(@current_time, INTERVAL 300 SECOND) AND date <= @current_time;
SET @calc_value = (ROUND((@completed/@attempted)*100,2);
IF @calc_value <= 10.00 THEN
INSERT INTO report(value1) value (@calc_value);
END IF;
END;
Problem:
The event is not getting created.
Need suggestions:
Does this create any overload on the callinfo table?
If yes, would you suggest any other way to achieve the same thing?
May I create around 50 similar events? Will that create a huge load on the callinfo table?
callinfo schema:
CREATE TABLE `callinfo` (
`uniqueid` varchar(60) NOT NULL DEFAULT '',
`accountid` int(11) DEFAULT '0',
`type` tinyint(1) NOT NULL DEFAULT '0',
`callerid` varchar(120) NOT NULL,
`callednum` varchar(30) NOT NULL DEFAULT '',
`seconds` smallint(6) NOT NULL DEFAULT '0',
`trunk_id` smallint(6) NOT NULL DEFAULT '0',
`trunkip` varchar(15) NOT NULL DEFAULT '',
`callerip` varchar(15) NOT NULL DEFAULT '',
`disposition` varchar(45) NOT NULL DEFAULT '',
`date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`debit` decimal(20,6) NOT NULL DEFAULT '0.000000',
`cost` decimal(20,6) NOT NULL DEFAULT '0.000000',
`provider_id` int(11) NOT NULL DEFAULT '0',
`pricelist_id` smallint(6) NOT NULL DEFAULT '0',
`package_id` int(11) NOT NULL DEFAULT '0',
`pattern` varchar(20) NOT NULL,
`notes` varchar(80) NOT NULL,
`invoiceid` int(11) NOT NULL DEFAULT '0',
`rate_cost` decimal(20,6) NOT NULL DEFAULT '0.000000',
`reseller_id` int(11) NOT NULL DEFAULT '0',
`reseller_code` varchar(20) NOT NULL,
`reseller_code_destination` varchar(80) DEFAULT NULL,
`reseller_cost` decimal(20,6) NOT NULL DEFAULT '0.000000',
`provider_code` varchar(20) NOT NULL,
`provider_code_destination` varchar(80) NOT NULL,
`provider_cost` decimal(20,6) NOT NULL DEFAULT '0.000000',
`provider_call_cost` decimal(20,6) NOT NULL,
`call_direction` enum('outbound','inbound') NOT NULL,
`calltype` enum('STANDARD','DID','FREE','CALLINGCARD') NOT NULL DEFAULT 'STANDARD',
`profile_start_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`answer_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`bridge_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`progress_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`progress_media_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`end_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`billmsec` int(11) NOT NULL DEFAULT '0',
`answermsec` int(11) NOT NULL DEFAULT '0',
`waitmsec` int(11) NOT NULL DEFAULT '0',
`progress_mediamsec` int(11) NOT NULL DEFAULT '0',
`flow_billmsec` int(11) NOT NULL DEFAULT '0',
`is_recording` tinyint(1) NOT NULL DEFAULT '1' COMMENT '0 for On,1 for Off'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='callinfo';
ALTER TABLE `callinfo` ADD UNIQUE KEY `uniqueid` (`uniqueid`), ADD KEY `user_id` (`accountid`);
More information about the callinfo table:
Around 20K records per hour are inserted into the callinfo table.
Please suggest any indexes that should be added to the schema to get good performance.
Some suggestions:
user-defined variables (variables with names starting with the @ character) are separate and distinct from local variables
there's no need to declare local variables that aren't referenced
use local variables in favor of user-defined variables
a column alias (identifier) that starts with the @ character needs to be escaped (or MySQL will throw a syntax error)
assigning a column alias (identifier) that looks like a user-defined variable creates just a column alias; it is not a reference to a user-defined variable
use SELECT ... INTO to assign scalar values returned from statement into local variables and/or user-defined variables
declaring datatype DECIMAL is equivalent to specifying DECIMAL(10,0)
in INSERT ... VALUES statement the keyword is VALUES not VALUE
best practice is to give local variables names that are different from column names
best practice is to qualify all column references
it's a bit odd to insert only a single column, a calculated value, into a table without some other identifying values (it's not illegal, and it may be exactly what the specification calls for; it just strikes me as a bit odd. I bring it up in light of the code as written, because it appears that the author of the code is not familiar with MySQL.)
using CONVERT_TZ is a bit odd, given that any datetime value referenced in a SQL statement will be interpreted in the current session time zone; we're kind of assuming that the date column is of DATETIME datatype, but that's just a guess
to create a MySQL stored program that contains semicolons, the DELIMITER for the session needs to be changed to character(s) that don't appear in the stored program definition
Rather than address each individual problem in the stored program, I'm going to suggest a revision that does what it looks like the original code was intended to do:
DELIMITER $$
CREATE EVENT alert_2 ON SCHEDULE EVERY 300 SECOND DO
BEGIN
DECLARE ld_current_time DATETIME;
DECLARE ln_calc_value DECIMAL(20,2);
-- DECLARE li_attempted INT;
-- DECLARE li_completed INT;
SET ld_current_time = CONVERT_TZ(NOW(), @@session.time_zone, '+0:00');
SELECT ROUND( 100.0
* SUM(CASE WHEN c.seconds > 0 THEN 1 ELSE 0 END)
/ COUNT(c.uniqueid)
,2) AS calc_value
-- , COUNT(c.uniqueid) AS attempted
-- , SUM(CASE WHEN c.seconds > 0 THEN 1 ELSE 0 END) AS completed
FROM callinfo c
WHERE c.date > ld_current_time + INTERVAL -300 SECOND
AND c.date <= ld_current_time
INTO ln_calc_value
-- , li_attempted
-- , li_completed
;
IF ln_calc_value <= 10.00 THEN
INSERT INTO report ( value1 ) VALUES ( ln_calc_value );
END IF;
END$$
DELIMITER ;
For performance, we want to have an index with date as the leading column
... ON `callinfo` (`date`, ...)
Ideally (for the query in this stored program) the index with the leading column of date would be a covering index (including all of the columns that are referenced in the query), e.g.
... ON `callinfo` (`date`,`seconds`,`uniqueid`)
Q: Does this create any overload on the callinfo table?
A: Since this runs a query against the callinfo table, it will need to obtain shared locks. With an appropriate index available, and assuming that 5 minutes of call info is a smallish set of rows, I wouldn't expect this query to contribute significantly towards performance problems or contention issues. If it does cause a problem, I would expect that this query in this stored program isn't the root cause of the problem; it will only exacerbate a problem that already exists.
Q: If yes, would you suggest any other way to achieve the same thing?
A: It's difficult to suggest alternatives for achieving a "thing" when we haven't defined the "thing" we are attempting to achieve.
Q: May I create around 50 similar events? Will that create a huge load on the callinfo table?
A: As long as the query is efficient, is selecting a smallish set of rows via a suitable index, and runs quickly, I wouldn't expect that query to create huge load, no.
FOLLOWUP
For optimal performance, we are definitely going to want an index with leading column of date.
I'd remove the reference to uniqueid in the query. That is, replace COUNT(c.uniqueid) with SUM(1). The results from those are equivalent (given that uniqueid is guaranteed to be non-NULL) except in the case of no rows, COUNT() will return 0 and SUM() will return NULL.
Since we're dividing by that expression, in the case of "no rows" it's a difference between "divide by zero" and "divide by null". And a "divide by zero" operation will raise an error with some settings of sql_mode. If I divide by COUNT(), I'm going to want to convert a zero to NULL before I do the division
... / NULLIF(COUNT(...),0)
or the more ANSI-standard-compliant
... / CASE WHEN COUNT(...) = 0 THEN NULL ELSE COUNT(...) END
but we can avoid that rigmarole by using SUM(1) instead, then we don't have any special handling for the "divide by zero" case. But what that really buys us is that we are removing the reference to the uniqueid column.
Then a "covering index" for the query will require only two columns.
... ON `callinfo` (`date`,`seconds`)
(i.e. EXPLAIN will show "Using index" in the Extra column, and show "range" for access)
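Concretely, the two-column covering index could be added like this (the index name is my own choice, not anything from the original schema):

```sql
-- Covering index for the event's query: range scan on date,
-- with seconds available directly from the index.
ALTER TABLE `callinfo`
  ADD KEY `idx_callinfo_date_seconds` (`date`, `seconds`);
```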
Also, I can't quite wrap my brain around the need for CONVERT_TZ.
Let's say I have this table:
CREATE TABLE `offers` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`status` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
)
and fill it with this record:
INSERT INTO offers (status) VALUES (null);
Now run this query:
DELETE FROM offers WHERE STATUS <> 3
I'm expecting the latest record to get removed from the table, but it doesn't. Why? And what's the correct way to solve such issues?
In SQL the value NULL is very special! Pretty much any expression that has a NULL value in it will evaluate to NULL. That means that when you do the STATUS<>3, if STATUS is NULL, the result is NULL, which when used directly as a truth value is false. So, the expression WHERE STATUS <> 3 selects rows that have a value other than 3. Rows with the value 3 and rows with no value (i.e. NULL) will not be selected. The only time a comparison of a variable that's NULL can be true is if you use the IS NULL comparison, or related special constructs. You probably want to review the relevant sections of the manual, if you actually want to use NULL in a constructive fashion.
This is answered in MySQL's documentation about NULL. The integer comparison you are doing is never true when STATUS is NULL. NULL is not treated as a value; it signifies the absence of one. If you want to delete those rows too, you have to say
DELETE FROM offers WHERE status <> 3 OR status IS NULL
NULL is not a string; it is a keyword in MySQL. To delete the row where status is NULL, use:
DELETE FROM offers WHERE STATUS is null
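If the intent is "delete everything whose status is not 3, including NULLs", another option is MySQL's NULL-safe equality operator <=>, which always returns 1 or 0 and never NULL:

```sql
-- NOT (status <=> 3) is true for rows where status differs from 3
-- and also for rows where status is NULL.
DELETE FROM offers WHERE NOT (status <=> 3);
```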
I am using MySQL as the database. Here is the table definition:
CREATE TABLE `test`.`header`
(
`header_id` BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
`title` VARCHAR(500) NOT NULL,
`body` VARCHAR(5000) NOT NULL,
`created_by_id_ref` BIGINT UNSIGNED NOT NULL,
`created_date` DATETIME NOT NULL,
`updated_date` DATETIME NULL DEFAULT NULL,
`is_void` TINYINT(1) NULL DEFAULT NULL,
PRIMARY KEY (`header_id`) ) ENGINE=INNODB CHARSET=latin1 COLLATE=latin1_swedish_ci;
In my interface the user can delete any record by simply selecting it in a grid view. In that case I simply update the "is_void" status to true.
I declared that column with the syntax shown above. Here it is again:
`is_void` TINYINT(1) NULL DEFAULT NULL,
So if I add an index to this column, is this column declaration good?
In that case non-voided records have NULL values by default; for voided records it will be 1.
So if I am going to filter for voided records, I have to use
Select ........ where is_void=1;
If I want to filter non-voided records, I can use
Select ........ where is_void IS NULL;
So does this NULL declaration affect my SELECT query performance? (Remember, I indexed this column.)
Or shall I declare my column as
`is_void` TINYINT(1) NOT NULL,
and then insert 0 for non-voided records. Then if I want to filter non-voided records, I can use
Select ........ where is_void=0;
So which is best:
where is_void=0; or where is_void IS NULL;
Thank you very much.
In terms of performance, both approaches are equivalent*.
In terms of logic, NULL is widely regarded as meaning "unknown value" or "not applicable". Go with 1 or 0, as defined by the TRUE and FALSE constants.
Also, even though MySQL implements it as an alias for TINYINT(1), I would advise using the ANSI-standard BOOLEAN type.
* Ok, this is not entirely true in theory.
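If you go with the 0/1 approach, a minimal sketch of the migration (column and table names taken from the question; the index name is my own):

```sql
-- Backfill existing NULLs before tightening the column.
UPDATE `header` SET `is_void` = 0 WHERE `is_void` IS NULL;

-- BOOLEAN is an alias for TINYINT(1); FALSE is an alias for 0.
ALTER TABLE `header`
  MODIFY `is_void` BOOLEAN NOT NULL DEFAULT FALSE,
  ADD KEY `idx_is_void` (`is_void`);
```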
Does anyone know if it's possible to implement a row constraint in MySQL?
Let's say I have a very simple table for maintaining promotion codes for a webshop:
CREATE TABLE `promotioncodes` (
`promotionid` int(11) NOT NULL AUTO_INCREMENT,
`promotioncode` varchar(30) DEFAULT NULL,
`partnerid` int(11) DEFAULT NULL,
`value` double DEFAULT NULL,
`nr_of_usage` int(11) NOT NULL DEFAULT '0',
`valid_from` datetime DEFAULT '0000-00-00 00:00:00',
`valid_through` datetime DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`promotionid`),
UNIQUE KEY `promotioncode_unique` (`promotioncode`)
)
Is there any way to make sure that when a row gets inserted or updated:
1) The 'valid_from' date is always 'smaller' (i.e. earlier) than 'valid_through'.
2) If 'valid_from' happens to be left blank/NULL, 'valid_through' must be blank/NULL too.
3) If 'valid_through' happens to be left blank/NULL, 'valid_from' must be blank/NULL too.
I read some stuff about triggers and stored procedures, but I don't have the feeling these are the solution. If they are, please give me a concrete example of how to implement this.
You should create a stored procedure that inserts the data. In this stored procedure you can implement any rules you need.
Here is the example.
DELIMITER $$
create procedure promotioncodes_insert(
IN valid_from datetime,
IN valid_thru datetime, -- the remaining column parameters are omitted here
OUT error_code int
)
exec:begin
set error_code = 0; -- everything is ok
if valid_from > valid_thru then
set error_code = -1; -- error
leave exec;
end if;
if (valid_from is null) and (valid_thru is not null) then
set error_code = -2; -- another kind of error
leave exec;
end if;
-- and so on
-- doing insert
end$$
DELIMITER ;
Please note that if you do a direct INSERT INTO promotioncodes (...) VALUES (...), these constraints will not be applied.
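If the rules must also hold for direct INSERT statements, a BEFORE INSERT trigger is one way to enforce them at the table level (a sketch for MySQL 5.5+, which supports SIGNAL; a matching BEFORE UPDATE trigger would be needed as well):

```sql
DELIMITER $$

CREATE TRIGGER promotioncodes_bi
BEFORE INSERT ON promotioncodes
FOR EACH ROW
BEGIN
  -- Rule 1: valid_from must not be later than valid_through.
  IF NEW.valid_from > NEW.valid_through THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'valid_from must not be later than valid_through';
  END IF;
  -- Rules 2 and 3: both dates NULL, or both set.
  IF (NEW.valid_from IS NULL) <> (NEW.valid_through IS NULL) THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'valid_from and valid_through must both be set or both be NULL';
  END IF;
END$$

DELIMITER ;
```

On MySQL 8.0.16 and later, the first rule could instead be expressed as a CHECK constraint, which is actually enforced there.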