MySQL Event Insert ID from another table - mysql

I would like to create an event that executes an INSERT query once a row's lend_date column is exactly 15 days old.
It would get the ID and userid of that row and insert them into another table.
For example:
id | userid | lend_date
---+--------+---------------------
 1 |      1 | 2015-09-24 15:58:48
 2 |      1 | 2015-09-21 08:22:48
And right now, it is exactly 2015-10-06 08:23:41. So the event should get the ID of the second row, which is 2, and insert it into another table.
What should my event query look like?
The event type is RECURRING. But I'm also not sure whether I should execute it every hour or every day. What would be the best recommendation for it?
And is this a better way than using Task Scheduler?
The other table that I want to insert the fetched ID into is notification_table, which will be used to notify the user that he/she has an overdue item.
notification_table looks like this:
id | userid | notificationid | notificationdate    |
---+--------+----------------+---------------------+
 1 |      1 |              1 | 2015-09-24 15:58:48 |
 2 |      1 |              1 | 2015-09-21 08:22:48 |
I'm looking at this query:
INSERT INTO notification_table (id, userid, notificationid, notificationdate)
SELECT id, userid, 1, NOW()
FROM first_table
WHERE lend_date + INTERVAL 15 DAY = NOW();

Seeing the words exactly, event, and datetime in the same sentence makes me cringe. Why? For one thing, it's hard to get one datetime value to exactly match another. For another thing, events often run slightly after the scheduled time, especially on a busy database server. It takes them a little time to start up.
If you need the id values from a table where the records are more than 15 days old, the most time-precise way to get them is with a query or view.
CREATE OR REPLACE VIEW fifteen
AS SELECT id
FROM table
WHERE `datetime` < NOW() - INTERVAL 15 DAY
You can, of course, write an event to copy the ids to a new table. You'll have to go to some trouble to make sure you don't hit the same id values more than once, by using this sort of query in the event.
INSERT INTO newtable (id)
SELECT id
FROM table
WHERE `datetime` < NOW() - INTERVAL 15 DAY
AND id NOT IN (SELECT id FROM newtable)
How often should you run the repeating event? That depends entirely on how quickly the id values need to make it into the new table after they turn fifteen days old. If your application requires it to be less than a minute, you really should go with the view rather than the event. Anything more than a minute of allowable delay will let you use a repeating event at that frequency.
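For reference, here is a rough sketch of what such a recurring event could look like, using the table and column names from the question (the event name is made up, the hourly schedule is only an example, and the event scheduler must be enabled with SET GLOBAL event_scheduler = ON):
-- sketch only: copies rows older than 15 days, skipping ids already notified
CREATE EVENT IF NOT EXISTS overdue_notifications
ON SCHEDULE EVERY 1 HOUR
DO
  INSERT INTO notification_table (id, userid, notificationid, notificationdate)
  SELECT f.id, f.userid, 1, NOW()
  FROM first_table AS f
  WHERE f.lend_date < NOW() - INTERVAL 15 DAY
    AND f.id NOT IN (SELECT id FROM notification_table);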

Related

Find the period where the number of occurrences is the highest

Given a table "events_log" in this form:
| id | started_at          | duration |
|  1 | 2017-06-01 09:00:00 |       80 |
|  2 | 2017-06-01 09:01:00 |       40 |
|  3 | 2017-06-01 09:01:23 |       20 |
I want to know when the most events were occurring (with minute precision):
| period              | count |
| 2017-06-01 09:00:00 |     1 |
| 2017-06-01 09:01:00 |     3 |
In reality, there are millions of events to handle.
My solution is to:
Create a temporary table with event starts grouped by minute
LEFT JOIN it with the events that fall within each period
See http://sqlfiddle.com/#!9/8546a/1
But performance is terrible ...
Is there a better way to do it ?
I would think group by, something like this:
select date_format(started_at, '%Y-%m-%d %H:%i') as yyyymmddhhmi, count(*)
from t
group by yyyymmddhhmi
order by count(*) desc
limit 10;
Performance will not be great.
Here is a modified version of your code. It will scan through the events_log table twice: once when building the tmp_event_starts helper table, and a second time when selecting all events that are happening in the specified interval. Also note the added index, which will significantly speed up execution; its absence might also be the reason why your original query was so slow.
CREATE TABLE events_log (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  started_at DATETIME,
  duration INT(11)
);
INSERT INTO events_log (started_at, duration) VALUES ('2017-06-01 09:00:00', 80);
INSERT INTO events_log (started_at, duration) VALUES ('2017-06-01 09:01:00', 40);
INSERT INTO events_log (started_at, duration) VALUES ('2017-06-01 09:01:23', 20);

CREATE /* TEMPORARY */ TABLE tmp_event_starts AS (
  SELECT DISTINCT DATE_ADD(started_at, INTERVAL -SECOND(started_at) SECOND) AS period_start
  FROM events_log
);

CREATE INDEX idx_tmp_event_starts
  ON tmp_event_starts (period_start);

SELECT period_start, COUNT(*), GROUP_CONCAT(id)
FROM events_log AS log
JOIN tmp_event_starts AS per
  ON per.period_start >= DATE_ADD(started_at, INTERVAL -SECOND(started_at) SECOND)
 AND per.period_start <= DATE_ADD(started_at, INTERVAL -SECOND(started_at) + duration SECOND)
GROUP BY period_start;
If you have a lot of events happening in the same minute and there are no minutes without events, then you might consider generating the helper table as a sequence of minutes independent of the data. In MySQL this is quite an awkward task, but some hints can be found in the blog post Calendar Tables: An Invaluable Database Tool.
It will also allow you to generate the helper table in advance, which speeds up the execution of the query itself significantly.
You might also consider adding an ended_at column to your events_log table, which would eliminate the need for that conversion during query execution.
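For what it's worth, pre-generating such a minute-by-minute helper table could look roughly like this (a sketch only; the minutes table name, the start time and the 10,000-minute window are assumptions):
-- Pre-generate one row per minute for a fixed window, so the helper table
-- no longer depends on which minutes actually contain events.
-- 10 x 10 x 10 x 10 digit combinations = 10,000 minutes, roughly a week.
CREATE TABLE minutes (period_start DATETIME NOT NULL PRIMARY KEY);
INSERT INTO minutes (period_start)
SELECT '2017-06-01 00:00:00' + INTERVAL (a.n + 10*b.n + 100*c.n + 1000*d.n) MINUTE
FROM       (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a
CROSS JOIN (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b
CROSS JOIN (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) c
CROSS JOIN (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) d;
The join in the query above can then be pointed at this pre-built table instead of tmp_event_starts.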

optimized way to calculate compliance in mysql

I have a table which contains a task list for persons. The columns are as follows:
+---------+-----------+-------------------+------------+---------------------+
| task_id | person_id | task_name | status | due_date_time |
+---------+-----------+-------------------+------------+---------------------+
| 1 | 111 | walk 20 min daily | INCOMPLETE | 2017-04-13 17:20:23 |
| 2 | 111 | brisk walk 30 min | COMPLETE | 2017-03-14 20:20:54 |
| 3 | 111 | take medication | COMPLETE | 2017-04-20 15:15:23 |
| 4 | 222 | sport | COMPLETE | 2017-03-18 14:45:10 |
+---------+-----------+-------------------+------------+---------------------+
I want to find out the monthly compliance percentage (completed tasks / total tasks * 100) of each person, like:
+---------------+-----------+------------+------------+
| compliance_id | person_id | compliance | month |
+---------------+-----------+------------+------------+
| 1 | 111 | 100 | 2017-03-01 |
| 2 | 111 | 50 | 2017-04-01 |
| 3 | 222 | 100 | 2017-03-01 |
+---------------+-----------+------------+------------+
Here person_id 111 has 1 task in March 2017 and its status is COMPLETE; as 1 out of 1 tasks was completed in March, compliance is 100%.
Currently, I am using a separate table which stores this compliance, but I have to recalculate the compliance and update that table every time a task status is changed.
I have also tried creating a view, but it takes too much time to execute: almost 0.5 seconds for 1 million records.
CREATE VIEW `person_compliance_view` AS
SELECT
`t`.`person_id`,
CAST((`t`.`due_date_time` - INTERVAL (DAYOFMONTH(`t`.`due_date_time`) - 1) DAY)
AS DATE) AS `month`,
COUNT(`t`.`status`) AS `total_count`,
COUNT((CASE
WHEN (`t`.`status` = 'COMPLETE') THEN 1
END)) AS `completed_count`,
CAST(((COUNT((CASE
WHEN (`t`.`status` = 'COMPLETE') THEN 1
END)) / COUNT(`t`.`status`)) * 100)
AS DECIMAL (10 , 2 )) AS `compliance`
FROM
`task` `t`
WHERE
((`t`.`isDeleted` = 0)
AND (`t`.`due_date_time` < NOW()))
GROUP BY `t`.`person_id` , EXTRACT(YEAR_MONTH FROM `t`.`due_date_time`);
Is there any optimized way to do it?
The first question to consider is whether the view can be optimized to give the required performance. This may mean making some changes to the underlying tables and data structure. For example, you might want indexes and you should check query plans to see where they would be most effective.
Other possible changes which would improve efficiency include adding an extra column "year_month" to the base table, which you could populate via a trigger. Another possibility would be to move all the deleted tasks to an 'archive' table to give the view less data to search through.
Whatever you do, a view will always perform worse than a table (assuming the table has relevant indexes). So depending on your needs you may find you need to use a table. That doesn't mean you should junk your view entirely. For example, if a daily refresh of your table is sufficient, you could use your view to help:
truncate table compliance;
insert into compliance select * from compliance_view;
Truncate is more efficient than delete, but you can't use a rollback, so you might prefer to use delete and top-and-tail with START TRANSACTION; ... COMMIT;. I've never created scheduled jobs in MySQL, but if you need help, this looks like a good starting point: here
If daily isn't often enough, you could schedule this to run more often than daily, but better options would be triggers and/or "partial refreshes" (my term, I've no idea if there is a technical term for the idea).
A perfectly written trigger would spot any relevant insert/update/delete and then insert/update/delete the related records in the compliance table. The logic is a little daunting, and I won't attempt it here. An easier option would be a "partial refresh" called from within a trigger. The trigger would spot the user targeted by the change, delete only the records from compliance which relate to that user, and then insert from your compliance_view the records relating to that user. You should be able to put that into a stored procedure which is called by the trigger.
Update expanding on the options (if a view just won't do):
Option 1: Daily full (or more frequent) refresh via a schedule
You'd want code like this executed (at least) daily.
truncate table compliance;
insert into compliance select * from compliance_view;
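If you want MySQL itself to run that refresh on a schedule, a minimal sketch of such an event might be (untested; it assumes the event scheduler is enabled with SET GLOBAL event_scheduler = ON, and the column lists may need adjusting to match your compliance table):
DELIMITER $$
CREATE EVENT IF NOT EXISTS refresh_compliance
ON SCHEDULE EVERY 1 DAY
DO
BEGIN
    -- full refresh: throw away the stored figures and rebuild them from the view
    TRUNCATE TABLE compliance;
    INSERT INTO compliance SELECT * FROM compliance_view;
END$$
DELIMITER ;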
Option 2: Partial refresh via trigger
I don't work with triggers often, so can't recall syntax, but the logic should be as follows (not actual code, just pseudo-code)
AFTER INSERT -- you may need one for each of INSERT / UPDATE / DELETE
FOR EACH ROW -- or if there are multiple rows and you can trigger only on the last one to be changed, that would be better
DELETE FROM compliance
WHERE person_id = INSERTED.person_id
INSERT INTO compliance select * from compliance_view where person_id = INSERTED.person_id
END
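For what it's worth, a rough sketch of that pseudo-code in actual MySQL syntax might look like this (untested; MySQL exposes the inserted row as NEW rather than INSERTED, triggers are always FOR EACH ROW, matching AFTER UPDATE / AFTER DELETE triggers would be needed too, and the column lists are assumptions):
DELIMITER $$
CREATE TRIGGER task_after_insert
AFTER INSERT ON task
FOR EACH ROW
BEGIN
    -- partial refresh: rebuild only the changed person's rows from the view
    DELETE FROM compliance WHERE person_id = NEW.person_id;
    INSERT INTO compliance (person_id, compliance, `month`)
    SELECT person_id, compliance, `month`
    FROM compliance_view
    WHERE person_id = NEW.person_id;
END$$
DELIMITER ;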
Option 3: Smart update via trigger
This would be similar to option 2, but instead of deleting all the rows from compliance that relate to the relevant person_id and creating them from scratch, you'd work out which ones to update, update them, and decide whether any should be added or deleted. The logic is a little involved, and I'm not going to attempt it here.
Personally, I'd be most tempted by Option 2, but you'd need to combine it with option 1, since the data goes stale due to the use of now().
Here's a similar way of writing the same thing...
Views are of very limited benefit in MySQL, and I think should generally be avoided.
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(task_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
,person_id INT NOT NULL
,task_name VARCHAR(30) NOT NULL
,status ENUM('INCOMPLETE','COMPLETE') NOT NULL
,due_date_time DATETIME NOT NULL
);
INSERT INTO my_table VALUES
(1,111,'walk 20 min daily','INCOMPLETE','2017-04-13 17:20:23'),
(2,111,'brisk walk 30 min','COMPLETE','2017-03-14 20:20:54'),
(3,111,'take medication','COMPLETE','2017-04-20 15:15:23'),
(4,222,'sport','COMPLETE','2017-03-18 14:45:10');
SELECT person_id
, DATE_FORMAT(due_date_time,'%Y-%m') yearmonth
, SUM(status = 'complete')/COUNT(*) x
FROM my_table
GROUP
BY person_id
, yearmonth;
person_id  yearmonth  x
      111  2017-03    1.0
      111  2017-04    0.5
      222  2017-03    1.0

Tracking the user's last activity

I have a table setup for member logins. Right now the last_login field is stamped with MySQL's NOW(). But I want to also track their last active time on the site. And the only way I can think of is to create a new query, insert it into every procedure I have on every page, and update a timestamped last_activity field for the current login. Is there a better way to do this?
Example:
MariaDB [master]> select logintime, last_activity
from memberlogins
where memberid = "1"
order by loginid
desc limit 1;
+---------------------+---------------------+
| logintime | last_activity |
+---------------------+---------------------+
| 2017-02-11 22:28:54 | 2017-02-11 23:48:14 |
+---------------------+---------------------+
That's what I want, to add the last_activity to the table. And the only way I can think of to accomplish this is to add this query:
$stmt = $dbsotp->prepare('UPDATE memberlogins
SET last_activity = NOW()
WHERE memberid = :memberid');
And then the rest of the PDO here. So all I'm asking is whether there's a better way to do this than inserting this query into every procedure I have on every page. I have 67 pages with several hundred procedures, which is why I ask if this is the only way or if there's a better way to go.

Graph per-day from ranges in MySQL

I am trying to make a graph that has a point for each day, showing the number of horses present per day.
This is an example of the data I have (MySQL):
horse_id | start_date | end_date   |
       1 | 2011-04-02 | 2011-04-03 |
       2 | 2011-04-02 | NULL       |
       3 | 2011-04-04 | 2014-07-20 |
       4 | 2012-05-11 | NULL       |
So a graph on that data should output one row per day, starting on 2011-04-02 and ending on CURDATE(); for each day it should return how many horses were registered.
I can't quite wrap my head around how I would do this, since I only have a start date and an end date for each item, and I want to know per day how many were present on that day.
Right now, I do a loop and a SQL query per day, but that is - as you might have guessed - thousands of queries, and I was hoping it could be done smarter.
If a day between 2011-04-02 and now contains nothing, I still want it in the output, but with a 0.
If possible I would like to avoid having a table with a row for each day containing a count.
I hope it makes sense, I am very stuck here.
What you should have is a table containing just dates, from at least the earliest date in your current table up to the current date.
Then you can use this table in a LEFT JOIN, something like this:
SELECT
dt.date,
COUNT(yt.horse_id)
FROM
dates_table dt
LEFT JOIN your_table yt ON dt.date BETWEEN yt.start_date AND COALESCE(end_date, CURDATE())
GROUP BY dt.date
Be sure to have a column of your_table in the COUNT() function, otherwise it counts the NULL values too.
The COALESCE() function returns the first of its parameter which isn't NULL, so if you don't have an end_date specified, the current date is taken instead.
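For completeness, one way to create and fill such a dates_table is sketched below (the start date and procedure name are examples, not something taken from the question):
CREATE TABLE dates_table (`date` DATE NOT NULL PRIMARY KEY);
DELIMITER $$
CREATE PROCEDURE fill_dates_table()
BEGIN
  -- one row per day from the earliest start_date up to today
  DECLARE d DATE DEFAULT '2011-04-02';
  WHILE d <= CURDATE() DO
    INSERT INTO dates_table (`date`) VALUES (d);
    SET d = d + INTERVAL 1 DAY;
  END WHILE;
END$$
DELIMITER ;
CALL fill_dates_table();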

creating a series of time periods as rows

I want to write a query that, for any given start date in the past, returns one row per week-long interval from that date up to the present.
For instance, given a start date of Nov 13th 2010 and a present date of 2010-12-16, I want a result set like:
+------------+------------+
| Start | End |
+------------+------------+
| 2010-11-15 | 2010-11-21 |
+------------+------------+
| 2010-11-22 | 2010-11-28 |
+------------+------------+
| 2010-11-29 | 2010-12-05 |
+------------+------------+
| 2010-12-06 | 2010-12-12 |
+------------+------------+
It doesn't go past 2010-12-12 because the week-long period that the present date falls in isn't complete yet.
I can't get a foothold on how I would even start to write this query.
Can I do this in a single query? Or should I use code for looping, and do multiple queries?
It's quite difficult (but not impossible) to create such a result set dynamically in MySQL as it doesn't yet support any of recursive CTEs, CONNECT BY, or generate_series that I would use to do this in other databases.
Here's an alternative approach you can use.
Create and prepopulate a table containing all the possible rows from some date far in the past to some date far in the future. Then you can easily generate the result you need by querying this table with a WHERE clause, using an index to make the query efficient.
The drawbacks of this approach are quite obvious:
It takes up storage space unnecessarily.
If you query outside of the range that you populated your table with, you won't get any results. This means you will either have to populate the table with enough dates to last the lifetime of your application, or else you will need a script to add more dates every so often.
See also this related question:
How do I make a row generator in MySQL
Beware, this is just a concept idea: I do not have a MySQL installation right here, so I cannot test it.
However, I would base it on a table containing integers, in order to emulate a series.
Something like :
CREATE TABLE integers_table
(
id integer primary key
);
Followed by (warning, this is pseudo code)
INSERT INTO integers_table(0…32767);
(that should be enough weeks for the rest of our lives :-)
Then
FirstMondayInUnixTimeStamp_Utc = 3600 * 24 * 4
SecondPerDay = 3600 * 24
(since 1 Jan 1970 was a Thursday. Beware, I did not cross-check! I might be off by a few hours!)
And then
CREATE VIEW weeks
AS
SELECT integers_table.id AS week_id,
FROM_UNIXTIME(FirstMondayInUnixTimeStamp_Utc + integers_table.id * SecondPerDay * 7) AS week_start,
FROM_UNIXTIME(FirstMondayInUnixTimeStamp_Utc + integers_table.id * SecondPerDay * 7 + SecondPerDay * 6) AS week_end
FROM integers_table;
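For what it's worth, a concrete version of the same idea might look like the sketch below (untested; the 5,200-week range is arbitrary, the inlined constants are 345600 = first Monday, 1970-01-05 00:00 UTC, and 86400 = seconds per day, and note that FROM_UNIXTIME uses the session time zone rather than UTC):
DELIMITER $$
CREATE PROCEDURE fill_integers_table()
BEGIN
  -- populate integers_table with 0..5199 (about 100 years of weeks)
  DECLARE i INT DEFAULT 0;
  WHILE i < 5200 DO
    INSERT INTO integers_table (id) VALUES (i);
    SET i = i + 1;
  END WHILE;
END$$
DELIMITER ;
CALL fill_integers_table();

CREATE VIEW weeks AS
SELECT id AS week_id,
       FROM_UNIXTIME(345600 + id * 86400 * 7)             AS week_start,
       FROM_UNIXTIME(345600 + id * 86400 * 7 + 86400 * 6) AS week_end
FROM integers_table;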