First, this is OpenCart.
I have two tables:
1. oc_product (product_id, model, price, event_start, event_end, etc.)
2. oc_product_to_category (product_id, category_id)
Every product has a Start Date and an End Date. I created a MySQL event that catches every product with an expired date (event_end < NOW()) and stores it in the category "Archive" with id = 68.
Here is the code of the MySQL event:
CREATE EVENT move_to_archive_category
ON SCHEDULE EVERY 1 MINUTE
STARTS NOW()
DO
INSERT INTO `oc_product_to_category` (product_id, category_id)
SELECT product_id, 68 as category_id
FROM oc_product p WHERE p.event_end < NOW() AND p.event_end <> '0000-00-00';
When the event runs it works properly! BUT when I go to the administration and publish a new product with an expired date, I wait 1 minute to see the product in the "Archive" category and nothing happens.
I looked at SHOW PROCESSLIST and everything is OK:
event_scheduler localhost NULL Daemon 67 Waiting for next activation NULL
and also "SHOW EVENTS" looks good
Db: events
Name: move_to_archive_category
Definer: root@localhost
Time zone: SYSTEM
Type: RECURRING
Execute at: NULL
Interval value: 1
Interval field: MINUTE
Starts: 2016-08-15 13:37:54
Ends: NULL
Status: ENABLED
Originator: 1
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: utf8_general_ci
I'm working locally, not live
Any ideas?
Thanks in advance! :)
I suggest turning on the sonar. I have 3 Event links hanging off my profile page, so I created a few helper tables (which can also be seen in those links) to assist in turning on the sonar and seeing what is up in your events. Note you can expand on it for performance tracking as I did in those links.
Remember that Events succeed or fail (in your mind) based on the data, and they do so silently. By tracking what is going on, you can vastly increase your happiness level when developing with them.
Event:
DROP EVENT IF EXISTS move_to_archive_category;
DELIMITER $$
CREATE EVENT move_to_archive_category
ON SCHEDULE EVERY 1 MINUTE STARTS '2015-09-01 00:00:00'
ON COMPLETION PRESERVE
DO
BEGIN
DECLARE incarnationId int default 0;
DECLARE evtAlias varchar(20);
SET evtAlias:='move_2_archive';
INSERT incarnations(usedBy) VALUES (evtAlias);
SELECT LAST_INSERT_ID() INTO incarnationId;
INSERT EvtsLog(incarnationId,evtName,step,debugMsg,dtWhenLogged)
SELECT incarnationId,evtAlias,1,'Event Fired, begin looking',now();
INSERT INTO `oc_product_to_category` (product_id, category_id)
SELECT product_id, 68 as category_id
FROM oc_product p WHERE p.event_end < NOW() AND p.event_end <> '0000-00-00';
-- perhaps collect metrics for above insert and use that in debugMsg below
-- perhaps with a CONCAT into a msg
INSERT EvtsLog(incarnationId,evtName,step,debugMsg,dtWhenLogged)
SELECT incarnationId,evtAlias,10,'INSERT finished',now();
-- pretend there is more stuff
-- ...
-- ...
INSERT EvtsLog(incarnationId,evtName,step,debugMsg,dtWhenLogged)
SELECT incarnationId,evtAlias,99,'Event Finished',now();
END $$
DELIMITER ;
Tables:
create table oc_product_to_category
( product_id INT not null,
category_id INT not null
);
create table oc_product
( product_id INT not null,
event_end datetime not null
);
drop table if exists incarnations;
create table incarnations
( -- NoteA
-- a control table used to feed incarnation id's to events that want performance reporting.
-- The long and short of it: insert a row here merely to acquire an auto_increment id
id int auto_increment primary key,
usedBy varchar(50) not null
-- could use other columns perhaps, like how used or a datetime
-- but mainly it feeds back an auto_increment
-- the usedBy column is like a dummy column just to be fed a last_insert_id()
-- but the insert has to insert something, so we use usedBy
);
drop table if exists EvtsLog;
create table EvtsLog
( id int auto_increment primary key,
incarnationId int not null, -- See NoteA (above)
evtName varchar(20) not null, -- allows for use of this table by multiple events
step int not null, -- facilitates reporting on event level performance
debugMsg varchar(1000) not null,
dtWhenLogged datetime not null
-- tweak this with whatever indexes you can bear to have
-- run maintenance on this table to rid it of unwanted rows periodically
-- as it impacts performance. So, move the rows out to an archive table or whatever.
);
Turn on Events:
show variables where variable_name='event_scheduler'; -- OFF currently
SET GLOBAL event_scheduler = ON; -- turn her on
SHOW EVENTS in so_gibberish; -- confirm
Confirm Evt is firing:
SELECT * FROM EvtsLog WHERE step=1 ORDER BY id DESC; -- verify with our sonar
For more details of those helper tables, visit those links off my profile page for Events. Pretty much just the one link for Performance Tracking and Reporting.
You will also note that, for the moment, it does not matter whether there is any data in the actual tables you were originally focusing on. That can come later, and it can be reported on in the evt log table by doing a custom CONCAT into a string variable (for the counts etc.) and reporting that in a step #, like step 10 or 20.
The point is, without something like this you are completely blind as to what is going on.
So,
I saw the following errors in the MySQL log:
160816 10:18:00 [ERROR] Event Scheduler: [root@localhost][events.move_to_archive_category] Duplicate entry '29-68' for key 'PRIMARY'
160816 10:18:00 [Note] Event Scheduler: [root@localhost].[events.move_to_archive_category] event execution failed.
so I just added IGNORE to the SQL INSERT... The final result is
INSERT IGNORE INTO `oc_product_to_category` (product_id, category_id)
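That is, the full statement inside the event body becomes (the same SELECT as before, only with IGNORE added):
INSERT IGNORE INTO `oc_product_to_category` (product_id, category_id)
SELECT product_id, 68 as category_id
FROM oc_product p WHERE p.event_end < NOW() AND p.event_end <> '0000-00-00';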
Related
I've recently begun learning SQL and am currently working on a project tracking the progress of the vaccination rollout. I'm trying to make an event which will automatically tally up the number of patients who have received both vaccine doses at a certain time each day.
This is what I've got so far. The event should return the timestamp and total number of second doses given (as defined in the patient_vaccine_history table), and add these entries to the vaccinated_tally table.
Workbench is kindly telling me that "COUNT" is not valid in line 17.
SET GLOBAL event_scheduler = ON;
CREATE TABLE vaccinated_tally(
ID INT NOT NULL AUTO_INCREMENT,
last_update TIMESTAMP,
fully_vaccinated_tally INT,
PRIMARY KEY (ID));
DELIMITER //
CREATE EVENT daily_tally
ON SCHEDULE AT NOW() + INTERVAL 1 SECOND
DO BEGIN
INSERT INTO vaccinated_tally(last_update)
VALUES (NOW());
INSERT INTO vaccinated_tally(fully_vaccinated_tally)
VALUES COUNT(pvh.nhs_number) -- this is the problem line
FROM patient_vaccine_history pvh
WHERE pvh.dose = 2
);
END//
DELIMITER ;
You would seem to want insert . . . select:
INSERT INTO vaccinated_tally (fully_vaccinated_tally)
SELECT COUNT(pvh.nhs_number) -- this is the problem line
FROM patient_vaccine_history pvh
WHERE pvh.dose = 2;
You are not inserting the timestamp. Perhaps you have a trigger or other mechanism for setting it.
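If you want the timestamp and the count in the same row, a single INSERT ... SELECT can set both columns at once (a minimal sketch, assuming the table and column names from the question):
INSERT INTO vaccinated_tally (last_update, fully_vaccinated_tally)
SELECT NOW(), COUNT(pvh.nhs_number)
FROM patient_vaccine_history pvh
WHERE pvh.dose = 2;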
Good evening,
I have one table where I store an element of type datetime.
I am going to insert a date into this element via a PHP script.
When the date is reached I want to increase "datepast" in table2;
we can do it by comparing "name" from table1 with "person_name" from table2.
Now the question is how to trigger an SQL script to do this job for me; it would be great if it were real time.
Thanks in advance,
create table if not exists table1 (
name varchar(64) NOT NULL,
finishtime DATETIME,
id_table1 int NOT NULL AUTO_INCREMENT,
primary key ( id_table1 ));
create table if not exists table2 (
person_name varchar(64) NOT NULL,
datepast int,
id_table1 int NOT NULL AUTO_INCREMENT,
primary key ( id_table1 ));
Cronjobs and MySQL events can do a good job handling such things. But the queries you put in there must be set up to be idempotent -- that is, they must be set up so if you run them more than once they have the same effect as running them once. Otherwise you will have a brittle solution.
When you're handling data based on expiration times like your finishtime, it's usually a good idea to try to use a query or a view, rather than an update.
For example you could create this view
CREATE OR REPLACE VIEW table2 AS
SELECT name AS person_name,
COUNT(*) AS datepast
FROM table1
WHERE finishtime <= NOW()
GROUP BY name;
Then, SELECT * FROM table2 will generate a result set just like SELECT person_name, datepast FROM table2 might generate. But the SELECT resultset will always be precisely accurate in time.
Wait! you say, isn't that inefficient? The answer is, probably not unless you have several hundred thousand rows or more in your table. SQL is built for this kind of declarative data stuff.
You can use MySQL events or a cron job. It is not real-time, but it can be close to it.
Because your posting is not complete and table2 is missing, I can only give you an example of how to set up an event. Inside DO BEGIN and END $$ you can add your MySQL query to update datepast.
DELIMITER $$
CREATE
EVENT `increase_date_past`
ON SCHEDULE EVERY 30 SECOND
DO BEGIN
{{READ DATE AND UPDATE TABLE2}}
END $$
DELIMITER ;
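One possible shape for the {{READ DATE AND UPDATE TABLE2}} placeholder, assuming datepast is an integer counter and using the column names from the question (note this sketch is not idempotent as written; in practice you would also mark processed rows, e.g. with a flag column, so each finished row is counted only once):
UPDATE table2 t2
JOIN table1 t1 ON t1.name = t2.person_name
SET t2.datepast = t2.datepast + 1
WHERE t1.finishtime <= NOW();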
Events are not activated by default; you have to go to
/etc/mysql/my.cnf
and activate events by placing a line inside the [mysqld] section.
[mysqld]
event_scheduler=1
and then sudo service mysql restart
I have an auction application with these two tables (this is highly simplified, obviously):
create table auctions (
auction_id int,
end_time datetime
);
create table bids (
bid_id int,
auction_id int,
user_id int,
amount numeric,
bid_time timestamp,
constraint foreign key (auction_id) references auctions (auction_id)
);
I don't want bids on an auction after that auction has ended. In other words, rows in the bids table should be allowed only when the bid_time is earlier than the end_time for that auction. What's the simplest way to do this in MySQL?
Unfortunately MySQL does not have a CHECK constraint feature. But you should be able to enforce this using a trigger. However, MySQL trigger support isn't as advanced or well optimized as it is in other RDBMSes, and you will suffer a considerable performance hit if you do it this way. So if this is a real-time trading system with massive numbers of concurrent users, you should look for another solution.
DELIMITER $$
CREATE TRIGGER bir_bids
BEFORE INSERT ON bids
FOR EACH ROW
BEGIN
DECLARE v_end_time datetime;
-- declare a custom error condition.
-- SQLSTATE '45000' is reserved for that.
DECLARE ERR_AUCTION_ALREADY_CLOSED CONDITION FOR SQLSTATE '45000';
SELECT end_time INTO v_end_time
FROM auctions
WHERE auction_id = NEW.auction_id;
-- the condition is written like this so that a signal is raised
-- only in case the bid time is greater than the auction end time.
-- if either bid time or auction end time are NULL, no signal will be raised.
-- You should avoid complex NULL handling here if bid time or auction end time
-- must not be NULLable - simply define a NOT NULL column constraint for those cases.
IF NEW.bid_time > v_end_time THEN
SIGNAL ERR_AUCTION_ALREADY_CLOSED;
END IF;
END $$
DELIMITER ;
Note that the SIGNAL syntax is available only since MySQL 5.5 (currently GA). Triggers are available since MySQL 5.0. So if you need to implement this in a MySQL version prior to 5.5, you need to hack your way around not being able to raise a signal. You can do that by causing some change in the data that will guarantee the INSERT to fail. For instance you could write:
IF NEW.bid_time > v_end_time THEN
SET NEW.auction_id = NULL;
END IF;
Since auction_id is declared NOT NULL in the table, the state of the data will be such that it cannot be inserted. The drawback is that you will get a NOT NULL constraint violation, and the application will have to guess whether this is due to this trigger firing or due to a "real" NOT NULL constraint violation.
For more information, see: http://rpbouman.blogspot.nl/2009/12/validating-mysql-data-entry-with_15.html and http://rpbouman.blogspot.nl/2006/02/dont-you-need-proper-error-handling.html
Insert into bids (auction_id, user_id, amount, bid_time)
Select auction_id, [USER_ID], [AMOUNT], [TIMESTAMP]
From auctions
WHERE UNIX_TIMESTAMP() <= UNIX_TIMESTAMP(end_time)
Of course you have to replace the '[]' values
Instead, do a simple thing: create a column named status and set its type to ENUM. When you want to block the auction, update its value to 0. The default should be 1, meaning open; 0 means closed.
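A minimal sketch of that idea, assuming the auctions table from the question (the column name and the example auction_id are illustrative, not from the original post):
ALTER TABLE auctions
ADD COLUMN status ENUM('0','1') NOT NULL DEFAULT '1'; -- '1' = open, '0' = closed

-- close an auction (auction_id 42 is a hypothetical example)
UPDATE auctions SET status = '0' WHERE auction_id = 42;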
I have a view in mysql which is made of three tables unioned together:
CREATE VIEW program_operator_jct_view AS
select
program_id,
operator_code,
'PROGRAM_OPERATOR' AS type
from
program_operator_jct
UNION
(select
program_id,
operator_code,
'PROGRAM_GROUP' AS type
from
program_operator_group_jct pg_jct,
operator_group_jct og_jct
where
pg_jct.group_id = og_jct.group_id)
From this view, I create a summary table for increased performance, which is indexed so my results from this summary table can be returned via covering indexes:
CREATE TABLE `program_operator_jct_summary` (
`program_id` int(7) NOT NULL,
`operator_code` varchar(6) NOT NULL,
PRIMARY KEY (`program_id`,`operator_code`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
//BUILD SUMMARY PROCEDURE
delimiter //
CREATE PROCEDURE update_program_operator_jct_summary ()
BEGIN
DELETE FROM program_operator_jct_summary;
INSERT INTO program_operator_jct_summary select DISTINCT program_id, operator_code from program_operator_jct_view;
INSERT INTO trigger_record (name) VALUES ('update_program_operator_jct_summary');
END
//
I attached this procedure to the insert, update and delete triggers of the underlying tables which make up the summary table:
-program_operator_jct
-program_operator_group_jct
-operator_group_jct
EXAMPLE:
delimiter //
CREATE TRIGGER trigger_program_operator_jct_insert AFTER INSERT ON program_operator_jct
FOR EACH ROW
BEGIN
CALL update_program_operator_jct_summary ();
END
//
Here's my problem when I add (5) operators to the program_operator_jct:
INSERT INTO program_operator_jct (program_id, operator_code) VALUES
('112', '000500'),
('112', '000432'),
('112', '000501'),
('112', '000264'),
('112', '000184')
This trigger runs (5) times; if I add 100 operators this trigger runs 100 times. This is a nice place to use triggers because I don't have to worry about the summary table being out of date with the underlying tables.
However, rebuilding the summary table for each value in an extended insert is way too much of a performance hit (sometimes I add hundreds of operators to programs at a time). I want the trigger to run once after the extended insert is performed on the underlying tables. Is this possible?
The trigger is doing its job, i.e. 'FOR EACH ROW'.
I don't believe that mysql gives you the option of running a trigger once at the end.
I'd call the stored procedure from your code after the INSERT has successfully completed.
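For example, something like this from the application side (a minimal sketch, assuming you drop the per-row triggers and reuse the insert and procedure names from the question):
-- after dropping the per-row triggers:
INSERT INTO program_operator_jct (program_id, operator_code) VALUES
('112', '000500'),
('112', '000432'),
('112', '000501');

CALL update_program_operator_jct_summary(); -- rebuild the summary once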
If you're worried about forgetting, set up a cron job to run it every once in a while.
Good luck.
I'm writing a stored procedure to update a table:
UPDATE st SET somedate = NOW();
The client of the SP must know the exact date and time generated by the NOW function.
There are two options:
1) the client passes an input parameter (called _now) to the SP giving it the current date and time
UPDATE st SET somedate = _now;
2) the SP returns the NOW output back to the client in an OUT parameter
UPDATE st SET somedate = NOW();
SELECT somedate FROM st INTO _now;
What do you think is the best option?
Are other options possible?
DECLARE varnow DATETIME;
SET varnow = NOW();
UPDATE st SET somedate = varnow;
-- return varnow to the client, e.g. via an OUT parameter or a SELECT
I would do something like this:
drop table if exists users;
create table users
(
user_id int unsigned not null auto_increment primary key,
username varchar(32) unique not null,
created_date datetime not null
)
engine=innodb;
delimiter ;
drop procedure if exists insert_user;
delimiter #
create procedure insert_user
(
in uname varchar(32)
)
proc_main:begin
declare id int unsigned default null;
declare created datetime default null;
set created = now();
insert into users (username, created_date) values (uname, created);
set id = last_insert_id();
-- use id elsewhere maybe...
select id as user_id, created as created_date;
end proc_main #
delimiter ;
call insert_user('f00');
call insert_user('bar');
select * from users;
I suspect that both approaches are wrong.
client of the SP must know the exact date and time
Why? I suspect you really mean that the client must be able to identify the records affected by a transaction - but using a timestamp to do that will not be accurate. And it's not just a transaction spanning more than 1 second that is the problem. Potentially two such operations may occur in the same second.
If you've got a set of records which you need to identify as belonging to some group, then that must be expressed in the schema - the timestamp of the most recent transaction is obviously not reliable, even assuming that you never have further updates on the table.
Add another column or another table and generate a surrogate key to describe the transaction.
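For example (a minimal sketch; the batch table and the batch_id column on st are hypothetical additions, not part of the original schema):
-- one row per update batch; its auto_increment id is the surrogate key
CREATE TABLE st_batch (
batch_id INT AUTO_INCREMENT PRIMARY KEY,
created_at DATETIME NOT NULL
);

-- inside the SP: create a batch, then tag the updated rows with it
INSERT INTO st_batch (created_at) VALUES (NOW());
SET @batch = LAST_INSERT_ID();
UPDATE st SET somedate = NOW(), batch_id = @batch;
-- the client identifies the affected rows by @batch, not by the timestamp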
C.